Yann LeCun, chief AI scientist at Meta and a pioneer of modern artificial intelligence, has declared that the true AI revolution is still on the horizon. Speaking at the 2024 K Science and Technology Global Forum in Seoul, hosted by South Korea’s Ministry of Science, LeCun emphasized the transformative potential of AI and warned against hasty regulations that could stifle innovation.
“The true AI revolution is yet to arrive,” LeCun said in his opening speech, adding that AI is poised to redefine how humans interact with technology. “In the near future, all of our interactions with the digital world will be mediated by AI assistants,” he said, envisioning systems with human-like intelligence.
While LeCun acknowledged the advances made by generative AI models such as OpenAI’s ChatGPT and Meta’s Llama, he highlighted their limitations. “LLMs can deal with language because it is simple and discrete, but they cannot deal with the complexities of the real world,” he explained. These systems lack the ability to reason, plan, and understand the physical world the way humans do.
To fill these gaps, LeCun described Meta’s efforts to develop a new AI architecture that can observe and learn from the physical world, much as a baby does by interacting with its environment. This purpose-driven AI aims to make predictions, grasp the complexities of the real world, and pave the way for more sophisticated generations of AI.
LeCun also championed the need for a collaborative open-source AI ecosystem. For AI models to be truly effective, he argued, they must be trained across diverse cultural backgrounds, languages, and values. “You can’t have a single organization somewhere on the West Coast of the United States training these models,” he said, calling for global cooperation in AI development.
At the same time, he cautioned that premature regulation could choke off progress. “Regulation can kill open source,” he said, appealing to governments to avoid restrictive laws that could impede technological advancement. He noted that no AI system has proven to be inherently dangerous and argued that concerns about AI risks should not hinder its development.