OpenAI CEO Sam Altman expects artificial general intelligence (AGI), or AI that can outperform humans at most tasks, to arrive around 2027 or 2028. Elon Musk's prediction is either 2025 or 2026, and he has claimed he is "losing sleep over the threat of AI dangers." Such predictions are wrong. As the limitations of current AI become increasingly apparent, most AI researchers have come to the view that simply building bigger and more powerful chatbots will not lead to AGI.
Even so, AI will continue to pose significant risks in 2025: not from artificial superintelligence, but from human misuse.
Some of this misuse is inadvertent, as when lawyers rely too heavily on AI. After the release of ChatGPT, for example, a string of lawyers have been sanctioned for using AI to draft court filings, apparently unaware of chatbots' tendency to fabricate citations. In British Columbia, lawyer Chong Ke was ordered to pay costs to opposing counsel after including fictitious AI-generated cases in a legal filing. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for submitting false citations. In Colorado, Zachariah Crabill was suspended for a year for citing fictitious case law generated with ChatGPT and blaming a "legal intern" for the errors. The list is growing rapidly.
Other misuses are intentional. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms. The images were created using Microsoft's AI tool Designer. The company had guardrails against producing images of real people, but a misspelling of Swift's name was enough to circumvent them. Microsoft has since closed this loophole. But Taylor Swift is only the tip of the iceberg: non-consensual deepfakes are widespread, in part because open-source tools for creating them are freely available. Legislation underway around the world aims to combat deepfakes in hopes of limiting the damage. Whether it proves effective remains to be seen.
In 2025, it will become even harder to tell the real from the fake. The fidelity of AI-generated audio, text, and images is already remarkable, and video is next. This could lead to a "liar's dividend": people in positions of power dismissing genuine evidence of their own wrongdoing by claiming it is fake. In 2023, Tesla argued that a 2016 video of Elon Musk might have been a deepfake, countering allegations that the CEO had exaggerated the safety of Tesla's Autopilot system, contributing to an accident. An Indian politician claimed that audio clips in which he acknowledged corruption in his party had been doctored, though at least one of the clips was verified as authentic by news outlets. And two defendants in the January 6 riot cases claimed that videos they appeared in were deepfakes. Both were convicted.
Meanwhile, companies are exploiting the confusion to market fundamentally dubious products under the "AI" label. This can go horribly wrong when such tools are used to classify people and make consequential decisions about them. The hiring company Retorio, for example, claims that its AI predicts candidates' suitability for a job from video interviews, but one study found that the system can be fooled simply by the presence of glasses or by swapping a plain background for a bookshelf, showing that it relies on superficial correlations.
AI is also now being used to deprive people of important life opportunities in domains such as healthcare, education, finance, criminal justice, and insurance. In the Netherlands, the tax authority used an AI algorithm to identify childcare benefit fraud. It falsely accused thousands of parents, often demanding repayments of tens of thousands of euros. In the fallout, the prime minister and his entire cabinet resigned.
In 2025, the risks of AI will come not from AI acting on its own, but from what people do with it. That includes cases where it seems to work well and is relied on too heavily (lawyers using ChatGPT); cases where it works well but is misused (non-consensual deepfakes and the liar's dividend); and cases where it is simply not fit for purpose (systems that deny people their rights). Mitigating these risks is a mammoth task for companies, governments, and society. It is hard enough without the distraction of science-fiction worries.