There’s an expression that has been popping up a lot lately on the internet, especially on Twitter and Reddit, where such conversations tend to take place: “Ninety-nine percent of people are walking around with no idea that the world is about to descend into chaos.” No, this is not about global warming, true as that may be. Nor is it about the stock market crash that is the subject of so much speculation, though that remains a possibility. It is about AI systems so capable that they qualify as artificial general intelligence (AGI); in other words, systems as capable as very smart humans.
What would happen if such a system were introduced? No one has a concrete answer yet, but it would be a landmark moment. Consider Elon Musk, who said just the other day that “AI will eventually render money meaningless.”
The trigger for Musk’s tweet was the unveiling of the OpenAI o3 model. It is not yet AGI, and it is not yet available to the public. But since its announcement on December 22, o3 has proven to be a Pandora’s box. Over the past few years, numerous benchmarks have been created to assess the “intelligence” of AI systems, and o3 scored brilliantly on them. If those benchmark results hold up once people begin using the model in a few months, o3 is on par with skilled software engineers around the world when it comes to programming and coding. It is also nearly as good as humans at pattern recognition and problem solving, the kinds of problems humans tackle in IQ tests. And it seems to match PhD academics on some of the most difficult math problems.
In other words, o3 is a very good AI system, and it has arrived to loud cheers from its creators. Mark Chen, a senior OpenAI researcher, pointed out that o3 achieved Elo ratings of 1800 to 2400 in competitive programming; not even OpenAI’s chief scientist, Jakub Pachocki, has scored that high. On ARC-AGI, a pattern-recognition benchmark created by former Google researcher François Chollet, o3 scores between 75 percent and 87 percent, depending on the amount of compute allocated. Smart humans score around 85 percent.
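To put those Elo numbers in perspective, the Elo system converts a rating gap into an expected probability of winning. Here is a minimal sketch in Python of the standard Elo formula; the 1650 rating used for a hypothetical strong human competitor is purely an illustrative assumption, not a figure reported by OpenAI or any contest platform.

    def elo_expected_score(rating_a: float, rating_b: float) -> float:
        """Probability that player A beats player B under the standard Elo model."""
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    # Hypothetical comparison: o3 at the top of its reported range (2400)
    # versus an illustrative 1650-rated competitive programmer.
    print(f"{elo_expected_score(2400, 1650):.2%}")  # ~98.68% expected win probability

At a 750-point gap, the higher-rated player is expected to win almost every contest, which is what makes the reported range so striking.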
This score was enough to impress not only Chollet but also Greg Kamradt, president of the ARC Prize Foundation. Kamradt noted that it took AI systems five years to go from 0 percent to 5 percent on this test, and then less than a year to reach 87 percent. The graph of that progress is frighteningly exponential. Chollet, who is often seen as an AI skeptic on Twitter, is impressed. “o3 represents a significant advance in AI’s ability to adapt to new tasks. This is a system that can adapt to tasks it has never encountered before, likely approaching human-level performance in the ARC-AGI domain. We are getting closer to that point,” he said.
The implications are mind-boggling. Looking at the advances in AI in 2024 (Google’s Gemini 2, Anthropic’s Claude 3.5 Sonnet, OpenAI’s o1 and o3), even the most hardcore skeptics believe that AI systems can already rival humans in routine and structured tasks, or will be able to very soon. That includes reading general text at a graduate level, programming at the level of an average expert, analyzing reports, writing summaries, handling paperwork, and even spotting patterns in medical scans. Essentially, within a year, AI could reliably handle many tasks currently performed by humans.
In fact, in a few years, systems like o3 may even handle highly technical yet logical and structured problems that currently require highly skilled humans. Who knows? Since o3 seems to be good at math, you might one day ask it to calculate the angles and contours of a planned flyover, or to draw up the blueprint for a hydropower dam. This is why Chollet writes in his analysis that “we need to plan for AI capabilities to compete with human work within fairly short timelines.”
A consensus is emerging that intelligence will become a cheap and abundant commodity within a few years. Last year, AI wizard Ilya Sutskever, who was OpenAI’s chief scientist until May, cryptically tweeted that mere “intelligence” was losing its importance. “If you value intelligence above all other human qualities, you’re gonna have a bad time,” he tweeted. It is unclear in what context Sutskever said this, but he was probably talking about a world where “intelligence” would be almost free and abundant.
As intelligence becomes commonplace, and for many tasks it already has, some fundamental questions will arise. We may have to rethink the entire systems that govern our economies, societies, and even human relationships, most of which are built on transactions. But I believe the most important questions concern the nature of intelligence and what makes us human. These are abstract and philosophical questions that can no longer be answered simply by pointing to our opposable thumbs, or to the newspaper. Albert Camus once jokingly wrote, “You can describe modern man in one word: he fornicates and reads newspapers.” Well, not in the era of AGI.
I touched on this in a previous article, when o3 was still months away. I believe the question will only become more urgent in the coming months and years: if not intelligence, then what makes humans special? “I think, therefore I am,” said René Descartes. For centuries, despite its challenges, this Cartesian axiom has rightly been one of the pillars on which humanity leans. Will it survive the era of super-smart AI? I am not sure.
I decided to put this question to ChatGPT. The model that responded was GPT-4o, an older system than o3, but I used it because it is the most advanced version freely available to users. GPT-4o took up the question of whether Descartes’ axiom applies to it, and rightly argued that “doubt,” one of the central ideas behind the cogito, is not something an AI system possesses. I must say that ChatGPT’s entire account of why it is not a creature of Descartes’ world was coherent and quite intelligent. I pushed it further, arguing back again and again. In the end, I arrived at a formulation that I think sums up the current moment: o3 shows that AI systems are getting closer to mastering logical and academic knowledge. But intelligence may be more than the ability to calculate polynomials. It may be something beyond that, what ChatGPT called “the boundary between simulated thinking and real intelligence.” And that, in ChatGPT’s words, is a “philosophical and practical question that we’re still exploring.”
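Curious readers can recreate a conversation like this one programmatically rather than through the ChatGPT interface. Below is a minimal sketch using OpenAI’s official Python SDK; the prompt wording is mine, not a transcript of my exchange, and it assumes an API key is set in the OPENAI_API_KEY environment variable.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Put the same Cartesian question discussed above to GPT-4o.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": (
                    "Does Descartes' axiom 'I think, therefore I am' apply "
                    "to an AI system like you? Can you genuinely doubt?"
                ),
            }
        ],
    )

    print(response.choices[0].message.content)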
(Javed Anwer is Technology Editor at India Today Group Digital. He has been writing about personal technology and gadgets since 2005)
(The views expressed in this opinion article are those of the author)