Five years ago, President Vladimir Putin declared that mastering artificial intelligence (AI) would enable nations to become “rulers of the world.” Western technology sanctions after Russia’s invasion of Ukraine should have dashed his ambition to lead in AI by 2030. That conclusion, however, may be hasty. Last week, Chinese research lab DeepSeek announced R1, an AI model that analysts say is comparable to OpenAI’s top reasoning model, o1. Remarkably, it matches o1’s capabilities while using a fraction of the computing power, at roughly a tenth of the cost. Fittingly, one of Putin’s first moves of 2025 was to align with China on AI development. R1’s launch appears to be no coincidence, coming just as Donald Trump endorsed Stargate, a $500 billion project intended to keep OpenAI ahead of its rivals. OpenAI has named High Flyer Capital, DeepSeek’s parent company, as a potential threat. And at least three Chinese labs claim to match or exceed OpenAI’s achievements.
In anticipation of tighter U.S. chip sanctions, Chinese companies stockpiled critical processors to ensure their AI models could advance even with limited hardware access. DeepSeek’s success highlights ingenuity born of necessity: without vast data centers or the most powerful dedicated chips, it achieved breakthroughs by improving data curation and model optimization. Unlike proprietary systems, R1’s source code is public and can be modified by anyone with the skill to do so. That openness has limits, however. R1 is supervised by China’s internet regulator and must adhere to “socialist core values.” Ask it about “Tiananmen Square” or “Taiwan,” and the model cuts the conversation off.
DeepSeek’s R1 sharpens the broader debate about the future of AI: should it remain walled off and controlled by a few large corporations, or be “open sourced” to foster global innovation? One of the Biden administration’s final acts was a crackdown on open-source AI on national security grounds, since freely accessible, powerful AI can empower bad actors. Notably, Trump later rescinded the order, arguing that curbing open-source development would harm innovation. Open-source advocates such as Meta have a point when they attribute recent advances in AI to a decade of free code sharing. But the risks are real. In February, OpenAI shut down accounts linked to state-sponsored hackers from China, Iran, Russia, and North Korea who had used its tools for phishing and malware campaigns; by summer, it had ceased service in those countries.
In the future, the United States’ superior control over critical AI hardware may leave rivals little room to compete. OpenAI offers “structured access” to govern how users interact with its models. But DeepSeek’s success suggests that open-source AI can drive innovation through ingenuity rather than brute processing power. The contradiction is plain: open-source AI democratizes the technology and accelerates progress, but it also enables criminal exploitation. Resolving this tension between innovation and security will require an international framework to prevent abuse.
The AI race is as much about global influence as it is about technological superiority. Putin has called on developing countries to band together to challenge U.S. technology leadership, but without global regulation, the hard push for AI dominance carries untold risks. It would be wise to heed AI pioneer and Nobel laureate Geoffrey Hinton, who warns that the breakneck pace of progress raises the likelihood of catastrophe. In the race to dominate this technology, the biggest risk is not falling behind; it is losing control entirely.