Karachi Chronicle
AI

Human misuse makes artificial intelligence even more dangerous

By Adnan Mahar · December 13, 2024 · 4 min read


OpenAI CEO Sam Altman expects artificial general intelligence (AGI), or AI that can outperform humans at most tasks, to arrive around 2027 or 2028. Elon Musk predicts either 2025 or 2026, and claims he is “losing sleep over the threat of AI dangers.” Such predictions are wrong. As the limitations of current AI become increasingly apparent, most AI researchers have come to the view that simply building bigger and more powerful chatbots will not lead to AGI.

But even in 2025, AI will still pose significant risks: not from artificial superintelligence, but from human misuse.

Some of these are unintentional abuses, such as lawyers becoming overly reliant on AI. After the release of ChatGPT, for example, a number of lawyers were sanctioned for using AI to generate court filings, apparently unaware of chatbots’ tendency to fabricate citations. In British Columbia, lawyer Chong Ke was ordered to pay costs to opposing counsel for including a fictitious AI-generated case in a legal submission. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for submitting false citations. In Colorado, Zachariah Crabill was suspended for a year after citing fictitious case law generated by ChatGPT and blaming the error on a “legal intern.” The list is growing rapidly.

Other misuses are intentional. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms. The images were created using Microsoft’s AI tool Designer. The company had guardrails against producing images of real people, but a misspelling of Swift’s name was enough to circumvent them. Microsoft has since fixed the error. But Taylor Swift is just the tip of the iceberg: non-consensual deepfakes are widespread, in part because open-source tools for creating them are freely available. Legislation underway around the world aims to combat deepfakes in hopes of limiting the damage. Whether it will be effective remains to be seen.

In 2025, it will be even harder to tell the real from the fake. The fidelity of AI-generated audio, text, and images is already remarkable, and video is next. This could lead to a “liar’s dividend”: people in positions of power dismissing evidence of their own wrongdoing as fake. In 2023, Tesla argued that a 2016 video of Elon Musk might have been a deepfake, to counter allegations that the CEO had exaggerated the safety of Tesla’s Autopilot system, contributing to an accident. An Indian politician claimed that an audio clip in which he acknowledged corruption in his party was doctored (the audio in at least one of his clips was confirmed as authentic by news outlets). And two defendants in the January 6 riots claimed that videos they appeared in were deepfakes. Both were convicted.

Meanwhile, companies are taking advantage of the confusion to market fundamentally dubious products under the label “AI.” This can go badly wrong when such tools are used to categorize people and make consequential decisions about them. The hiring firm Retorio, for example, claims its AI predicts candidates’ suitability for a job from video interviews, but one study found that the system can be fooled simply by wearing glasses or by swapping a plain background for a bookshelf, showing that it relies on superficial correlations.

AI is also now being used to deprive people of important life opportunities in areas such as healthcare, education, finance, criminal justice, and insurance. In the Netherlands, the tax authority used an AI algorithm to identify childcare benefits fraud. It falsely accused thousands of parents, often demanding repayments of tens of thousands of euros. In the fallout, the prime minister and his entire cabinet resigned.

In 2025, AI risks will come not from AI acting on its own, but from what humans do with it. That includes cases where it appears to work well and people become overly reliant on it (lawyers using ChatGPT); cases where it works well but is misused (non-consensual deepfakes and the liar’s dividend); and cases where it is simply not fit for purpose (systems that deny people their rights). Mitigating these risks is a daunting task for businesses, governments, and society. It is hard enough without worrying about science fiction.

Adnan Mahar
Adnan is a passionate doctor from Pakistan with a keen interest in exploring the world of politics, sports, and international affairs. As an avid reader and lifelong learner, he is deeply committed to sharing insights, perspectives, and thought-provoking ideas. His journey combines a love for knowledge with an analytical approach to current events, aiming to inspire meaningful conversations and broaden understanding across a wide range of topics.
