If 2023 was the year of wonder at artificial intelligence, 2024 was the year of trying to make that wonder do something useful without breaking the bank.
There was a “movement from presenting models to actually building products,” says Arvind Narayanan, a Princeton University computer science professor and co-author of the new book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.”
The first 100 million or so people who tried ChatGPT when it was released two years ago actively sought out chatbots, finding them surprisingly useful at some tasks and laughably mediocre at others.
Today, generative AI technology is being incorporated into more and more of the services we use whether we go looking for it or not, from AI-generated answers in Google search results to new AI features in photo-editing tools.
“The main mistake with generative AI last year was that companies were releasing these very powerful models without a concrete way for people to make use of them,” Narayanan said. “What we’re seeing this year is the gradual building of products that can take advantage of those capabilities and do useful things for people.”
At the same time, since OpenAI released GPT-4 in March 2023 and competitors introduced large language models of similar capability, these models have stopped getting dramatically “bigger and qualitatively better,” resetting the hyperbolic expectation that AI was racing every few months toward some kind of greater-than-human intelligence. That also means the public conversation has shifted from “Will AI kill us?” to treating it like a normal technology, Narayanan said.
On quarterly earnings calls this year, tech executives often fielded questions from Wall Street analysts seeking assurances of future payoffs from the massive sums being spent on AI research and development. Building the AI systems behind generative tools such as OpenAI’s ChatGPT and Google’s Gemini requires investing in energy-hungry computing systems running on powerful, expensive AI chips. They require so much electricity that tech giants announced deals this year to tap nuclear power to help run them.
“We’re talking about hundreds of billions of dollars of capital that has been poured into this technology,” said Goldman Sachs tech analyst Kash Rangan.
Another analyst at the New York investment bank drew attention over the summer by arguing that AI isn’t solving the complex problems that would justify its steep costs. He also questioned whether AI models, even when trained on much of the written and visual data produced over the course of human history, will ever be able to do what humans do so well. Rangan takes a more optimistic view.

“There was tremendous excitement that this technology was going to be absolutely game-changing, but we haven’t seen anything like that in the two years since ChatGPT was introduced,” Rangan said. “It’s more expensive than we expected, and it’s not as productive as we expected.”
But Rangan remains bullish about the potential, saying AI tools have already been shown to “really increase productivity” in sales, design and many other professions.
As the technology advances, some workers wonder whether AI tools will augment their jobs or replace them. The technology company Borderless AI, for example, uses Cohere’s AI chatbot to draft employment contracts for workers in Turkey and India without the help of outside lawyers or translators.
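As a purely illustrative sketch (not Borderless AI’s actual pipeline), a drafting request to Cohere’s chat endpoint through its Python SDK might look like the following; the API key, model name and prompt are placeholders, and any generated contract would still need review by a qualified lawyer.

import cohere

# Hypothetical contract-drafting call via Cohere's Python SDK
# (pip install cohere). Prompt and model name are illustrative only.
co = cohere.Client("YOUR_API_KEY")  # placeholder credential
response = co.chat(
    model="command-r",  # assumed model id; consult Cohere's docs
    message=(
        "Draft a plain-language employment agreement for a full-time "
        "software engineer based in Turkey, covering salary, notice "
        "period, and statutory leave."
    ),
)
print(response.text)  # the generated draft, for human legal review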
Video game performers with the Screen Actors Guild-American Federation of Television and Radio Artists went on strike in July, warning that AI could be used to turn one performance into any number of others without their consent and, in the process, reduce or eliminate job opportunities. Concerns about how movie studios would use AI helped fuel last year’s four-month film and television strike by the union. Some game companies have signed side agreements with the union codifying certain AI protections so they can keep working with its members during the strike.
Musicians and writers have expressed similar concerns about AI scraping their voices and books. But Walid Saad, a professor of electrical and computer engineering at Virginia Tech and an AI expert, said generative AI still cannot create original works or “something completely new.”
“You’re more informed because you can train on more data. But having more information doesn’t mean you’re more creative,” he said. “As humans, we understand the world around us, right? We understand physics. If you throw a ball on the ground, you know it’s going to bounce. AI doesn’t understand that.”
Saad cited memes about AI as an example of its shortcomings. When someone told the AI engine to create an image of salmon swimming in a river, it created a picture of a river with salmon fillets found at the grocery store, he said.
“What AI currently lacks is the common sense that humans have, and I think that’s the next step,” he said.
That kind of reasoning is an important part of making AI tools more useful to consumers, said Vijoy Pandey, senior vice president of Outshift, Cisco’s innovation and incubation arm. AI developers are increasingly pitching the next wave of generative AI chatbots as AI “agents” that can do more useful things on people’s behalf.
That could mean being able to ask an AI agent an ambiguous question and letting the model reason through and plan out the steps needed to solve an ambitious problem, Pandey said, a direction he expects much of the technology to move in 2025.
Ultimately, Pandey predicted, rather than performing tasks as standalone AI tools, agents will come together to solve problems the way people team up, with future AI agents working as an ensemble.
Building software in the future, for example, will likely rely on teams of AI software agents, Pandey said, each with its own specialty: “Some agents check accuracy, some agents check security, and some agents check scale.”
“We are getting closer to the future of agents,” he said. “All of these agents are very good at a particular skill, but they also have a little bit of personality and color, because that’s the way we work.”
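To make the ensemble idea concrete, here is a minimal, invented sketch in Python; no specific agent framework is implied, and the “accuracy,” “security” and “scale” checks are toy stand-ins for the specialties Pandey describes.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str                    # the agent's specialty, e.g. "security"
    check: Callable[[str], str]  # how this specialist reviews an artifact

def review(artifact: str, agents: list[Agent]) -> dict[str, str]:
    """Each specialist reviews the same artifact independently,
    the way teammates would divide up a code review."""
    return {agent.name: agent.check(artifact) for agent in agents}

# Toy specialists standing in for accuracy, security and scale checks.
ensemble = [
    Agent("accuracy", lambda code: "ok" if "TODO" not in code else "unfinished work flagged"),
    Agent("security", lambda code: "ok" if "eval(" not in code else "eval() call flagged"),
    Agent("scale",    lambda code: "ok" if len(code) < 10_000 else "too large to review"),
]

print(review("def add(a, b):\n    return a + b\n", ensemble))
# -> {'accuracy': 'ok', 'security': 'ok', 'scale': 'ok'}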
AI tools have streamlined parts of the medical field and, in some cases, literally lent it a hand. This year’s Nobel Prize in chemistry (one of two Nobel Prizes awarded for AI-related science) went in part to Google DeepMind researchers for AI work predicting protein structures that could help discover new medicines.

Virginia Tech’s Saad said AI is helping speed up diagnosis by giving doctors a quick starting point for patient-care decisions. The AI isn’t detecting disease itself, he said; it rapidly digests data and flags potential problem areas for real doctors to investigate. But as in other fields, the technology carries a risk of perpetuating falsehoods.
For example, tech giant OpenAI touts its AI-powered transcription tool Whisper as having near “human-level robustness and accuracy.” But experts say Whisper has a major flaw: it is prone to making up chunks of text, or even entire sentences.
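For context, transcribing a file with OpenAI’s open-source whisper package (pip install openai-whisper) takes only a few lines; the file name below is a placeholder, and the printed transcript is exactly the kind of output experts say should be double-checked for fabricated passages.

import whisper

model = whisper.load_model("base")        # a small multilingual model
result = model.transcribe("meeting.mp3")  # placeholder audio file
print(result["text"])                     # transcript may read fluently
                                          # yet contain invented sentences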
Cisco’s Pandey said some of the company’s pharmaceutical customers are finding AI useful for bridging the gap between “wet labs,” where humans conduct physical experiments and research, and “dry labs,” where people analyze data and often use computers for modeling.
When it comes to drug development, that collaborative process can take years, he said; with AI, it can be shortened to days.
“For me, this was the most dramatic use,” Pandey said.
Published: December 31, 2024, 8:40 AM IST