DeepSeek may have taken the world by storm with its AI models, but Anthropic's CEO, Dario Amodei, is not satisfied with the safety guardrails the company has put in place.
“Do you have anything to say to DeepSeek?” Amodei was asked on the podcast. “I don’t know,” he replied. “They seem to be talented engineers. I think the main thing I’d say to them is to take these concerns about the autonomy of AI systems seriously. When we ran evaluations on the DeepSeek model – if a model can generate information about biological weapons that cannot be found on Google or easily found in textbooks, then there is a degree of national security concern about the model – the DeepSeek model did the worst of basically every model we’ve tested, in that it had no blocks at all against generating this information,” he added.
![](http://officechai.com/wp-content/uploads/2025/01/Aerd92BCnqVQkLxxmhLdvA-1200-80-1024x576.jpg)
Amodei acknowledged that the DeepSeek model itself is likely not dangerous, but urged the AI community to think seriously about AI safety. “I don’t think today’s models are literally dangerous. But I think we’re on an exponential, like everything else, and it might be later this year, perhaps next year. My advice to DeepSeek is to take these safety considerations around AI seriously. As you know, the majority of American AI companies have come to terms with these issues around AI autonomy and AI misuse, and these are serious and potentially real issues,” he added.
“My number one hope is for (DeepSeek’s engineers) to come work in the US, whether for us or for another company. My second hope is that if they’re going to keep doing what they’re doing, they at least take these concerns seriously,” he said.
Amodei appears to be saying that DeepSeek has released very performant models, but without the safety guardrails that many US-made models have. This is not the first time Amodei has criticized DeepSeek. When the model first appeared, he had argued that Chinese companies could not compete with American ones, and that America led the AI race. It is also worth noting that Anthropic itself evaluates new AI models for how safe they are. Still, DeepSeek has created quite an extraordinary situation for tech. So far, US companies had criticized Chinese models for being too censored and too tightly controlled, but it now appears that a Chinese company has created a product that is less restrictive than their own.