Remember when OpenAI’s nonprofit board unceremoniously fired Sam Altman because the CEO had not been “consistently candid” with its members? That accusation sent him into the wilderness for all of four days before he was reinstated. A year later, Altman is being less than consistent about the future of artificial intelligence. In an interview with Bloomberg Businessweek published on Monday, Altman admitted that a date he had once given for when OpenAI would build artificial general intelligence (AGI), the theoretical threshold at which AI surpasses human intellect, was “completely random.” He had picked 2025, 10 years after the company was founded.
Mr. Altman’s candor about that mistake was momentarily refreshing, but in the same interview he offered a new prediction that was just as speculative. “I think AGI will probably be developed during this presidential term,” he said. In a personal blog post published the same Monday, he made a bigger claim: AI “agents” will join the workforce this year and materially change the output of companies.
Altman has become a master at balancing humility and hype. He acknowledges past missteps while making new, equally speculative predictions about the future, a confusing cocktail that distracts from thorny present-day issues. Take everything he says with a pinch of salt.
Technology company leaders have long tried to sell us a mirage of the future. Elon Musk claimed self-driving taxis would be on the roads by 2020, and Steve Jobs was famous for his “reality distortion field.” But Altman’s strategic ambiguity is more sophisticated, because he mixes his claims with apparent candor, such as tweeting on Monday that OpenAI is losing money because its premium service is so popular, or acknowledging that his earlier AGI timeline was a guess. Doing so can make his other predictions and claims seem more credible.
The stakes are also different from those of Mr. Musk, who sold cars and rockets, and Mr. Jobs, who sold consumer gadgets. Mr. Altman is marketing software that could transform education and employment for millions of people, much as the internet itself changed nearly everything, and his predictions can help steer the decisions of companies and governments fearful of being left behind.
One risk, for example, is that regulation gets watered down. Several countries, including the US, UK, Japan, Canada, and Singapore, established AI safety institutes in 2024, but global oversight could lose ground this year. Eurasia Group, the policy research firm founded by American political scientist Ian Bremmer, cites the rollback of AI regulation as one of the biggest risks of 2025. Bremmer notes that President-elect Donald Trump is likely to rescind President Joe Biden’s executive order on AI, and that the international AI safety summit series begun by the UK will, when it continues in Paris this year, be rebranded the “AI Action Summit.” (France is home to the promising AI startup Mistral AI.)
In a way, Altman’s comments about the impending arrival of AGI help justify the shift from “safety” to “action” at these summits, because setting up meaningful oversight seems harder when things are moving this fast. The implicit message: this is happening too quickly for traditional regulatory frameworks to keep up. Altman is also inconsistent in how he talks about AI safety itself. He stressed its importance in Monday’s blog post, but downplayed it in a December interview with New York Times journalist Andrew Ross Sorkin at the DealBook Summit, saying that “a lot of the safety concerns that people have expressed don’t actually come at the AGI moment. AGI can get built, the world goes on mostly the same way, the economy moves faster, things grow faster.”
That’s a persuasive story for a political leader already inclined toward light-touch regulation, such as Mr. Trump, to whose inaugural fund Mr. Altman donated $1 million. The problem is that the promise of a bright future constantly distracts from immediate issues: the looming disruption that AI will bring to work, education, and the creative arts, and the bias and security problems that still dog generative AI.
Asked by Bloomberg about AI’s energy consumption, Altman quickly reached for an unproven technology as the answer. “Nuclear fusion will work,” he replied, referring to the still-experimental process of extracting power at scale from fusing atoms. “Soon,” he added. “Well, there’s going to be a demonstration of net-gain fusion soon.” Fusion, as it happens, has been the subject of overly optimistic predictions for decades, and in this case Altman used it to deflect an issue that threatens to curb his ambitions.
Altman appears to be running a more sophisticated version of the Silicon Valley hype machine. That matters because he is not just selling a service; he is shaping how companies and policymakers view AI at a pivotal moment, especially when it comes to regulation. His message, in effect: AGI will arrive during Trump’s term, but the world will carry on much as before, so there is no need for extra checks and balances. That is far from the truth.
(Parmy Olson is a Bloomberg opinion columnist who writes about technology and AI)