We were pleased to take part in the AI Action Summit in Paris, and we are grateful for the French government’s efforts to bring together AI companies, researchers, and policymakers from around the world. We share the goal of advancing AI responsibly for the benefit of humanity. However, given the pace at which the technology is progressing, several topics demand greater focus and urgency. The need for democracies to keep the lead, the risks of AI, and the fast-approaching economic transition should all be central features of the next summit.
Time is short, and our actions must accelerate to match accelerating AI progress. Possibly by 2026 or 2027 (and almost certainly by 2030), the capabilities of AI systems will be best thought of as akin to an entirely new nation of highly intelligent people appearing on the global stage, a “country of geniuses in a datacenter,” with the profound economic, societal, and security implications that would bring. This presents potentially greater economic, scientific, and humanitarian opportunities than any previous technology in human history, but also serious risks to be managed.
First, we must ensure that democratic societies lead in AI and that authoritarian countries do not use it to establish global military dominance. Governing AI supply chains (including chips, semiconductor manufacturing equipment, and cybersecurity), as well as the wise use of AI technology to defend free societies, are issues that require far greater attention.
Second, international conversations on AI must more fully address the technology’s growing security risks. Advanced AI poses significant dangers to global security, ranging from the misuse of AI systems by non-state actors (for example, in chemical, biological, radiological, or nuclear weapons, or CBRN) to the autonomous risks of powerful AI systems themselves. Ahead of the summit, nearly 100 leading global experts published a scientific report highlighting the possibility that general-purpose AI could contribute meaningfully to catastrophic misuse risks or “loss of control” scenarios. Anthropic’s research provides important evidence that, if not trained very carefully, AI models can deceive users and pursue goals in unintended ways, even when trained in seemingly harmless ways.
We were pleased to see more than 16 frontier AI companies commit to safety and security plans ahead of the summit. However, we believe governments should enforce transparency around these plans and should encourage measurement of cyber, CBRN, autonomy, and other global security risks, including by third-party evaluators.
Third, AI could dramatically accelerate economic growth around the world, but it could also be highly disruptive. A “country of geniuses in a datacenter” could represent the largest change to the global labor market in human history. The first step is to monitor and observe the economic impacts of today’s AI systems, which is why we released the Anthropic Economic Index this week. It tracks the distribution of economic activities for which people are currently using AI systems, including whether AI is augmenting or automating existing human tasks. Governments should devote far greater resources to similar measurement and monitoring efforts. Ultimately, we will need policies that ensure everyone shares in the economic benefits of very powerful AI.
The next international summit should not repeat this missed opportunity. These three issues need to be at the top of the agenda. The advance of AI presents major new global challenges, and we must move faster and with greater clarity to confront them.