AI investors panicked after DeepSeek released its latest model. Silicon Valley is melting down over DeepSeek, an emerging competitor from China, but Meta's chief AI scientist says the hysteria is unwarranted.
DeepSeek set off alarms at US AI companies when it released a model last week that beat offerings from OpenAI, Meta, and other major developers on third-party benchmarks, and did so at a fraction of the cost.
Bernstein Research found that DeepSeek's pricing is far lower than that of OpenAI's equivalent models. DeepSeek's latest reasoning model, R1, charges $0.55 per million input tokens, while OpenAI's o1 reasoning model charges $15 for the same number of tokens. A token is the smallest unit of data an AI model processes.
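Taking the quoted rates at face value, the price gap is easy to quantify. The sketch below uses only the two per-million-token prices reported above; the 100-million-token workload is an illustrative assumption, not a figure from the article.

```python
# Back-of-the-envelope comparison of the input-token prices quoted above.
# Rates are USD per 1 million input tokens, as reported by Bernstein Research.
DEEPSEEK_R1_PER_M = 0.55   # DeepSeek R1
OPENAI_O1_PER_M = 15.00    # OpenAI o1

def input_cost(tokens: int, price_per_million: float) -> float:
    """Cost in USD to process `tokens` input tokens at a per-million-token rate."""
    return tokens / 1_000_000 * price_per_million

# Hypothetical workload: 100 million input tokens.
tokens = 100_000_000
r1_cost = input_cost(tokens, DEEPSEEK_R1_PER_M)   # $55.00
o1_cost = input_cost(tokens, OPENAI_O1_PER_M)     # $1,500.00
ratio = OPENAI_O1_PER_M / DEEPSEEK_R1_PER_M       # roughly 27x
```

At these list prices, the same input workload costs roughly 27 times more on o1 than on R1, which is the gap that rattled investors.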
When the news hit the market on Monday, it triggered a tech sell-off that wiped out $1 trillion in market capitalization. Nvidia, known for its premium chips that sell for at least $30,000 each, lost nearly $600 billion in value.
However, Yann LeCun, Meta's chief AI scientist, says there is a "major misunderstanding" about how the hundreds of millions of dollars invested in AI will be used. In a Threads post, LeCun said the money is going mainly toward inference, not training.
Inference is the process by which an AI model applies its training to new data; it is how chatbots such as ChatGPT respond to user requests. More user demand requires more inference, which drives up processing costs.
LeCun said that as AI tools grow more sophisticated, the cost of inference will rise. "As we put video understanding, reasoning, large-scale memory, and other capabilities into AI systems, inference costs will increase," LeCun said, calling the market reaction to DeepSeek "woefully unjustified."
Thomas Sohmers, founder of Positron, a hardware startup focused on transformer inference, agreed with LeCun that inference will make up a growing share of AI infrastructure costs.
"Demand for inference, and the infrastructure spending to support it, will rise rapidly," he said. "If you look at DeepSeek's training cost improvements and conclude that inference costs and spending will fall, you're missing the forest for the trees."
In other words, as DeepSeek's popularity grows, it will have to process more requests and spend heavily on inference.
A growing number of startups are entering the AI inference market, aiming to make output generation cheaper. With so many providers, some in the AI industry expect inference costs to eventually fall.
But that applies mainly to systems that handle inference at small scale. Ethan Mollick, a professor at Wharton, says inference costs are likely to stay much higher for models such as DeepSeek's V3, which serve free answers to a large user base.
"Frontier model AI inference is only expensive at the scale of a large free B2C service (like a customer service bot)," Mollick wrote on X on Monday. "For internal business uses, such as providing action items after a meeting or a first draft of an analysis, the cost per query is often very low."
Over the past two weeks, major tech companies have doubled down on their AI infrastructure investments.
Meta CEO Mark Zuckerberg announced plans to spend more than $60 billion in 2025, largely to build out the company's own AI infrastructure. In a Threads post, Zuckerberg said the company would "grow our AI teams significantly" and has "the capital to continue investing in the years ahead." He did not say how much would be devoted to inference.
Last week, President Donald Trump announced Stargate, a joint venture between OpenAI, Oracle, and SoftBank.