Nvidia (NVDA 2.63%) stock is currently down 12% from its all-time high. In January, China-based startup DeepSeek claimed it had trained competitive artificial intelligence (AI) models using a fraction of the computing capacity deployed by major developers like OpenAI, triggering the sharp sell-off.
Investors fear that DeepSeek's techniques will be adopted by other AI developers, significantly reducing demand for Nvidia's high-end graphics processing units (GPUs), which are the best hardware available for developing AI models. However, these concerns may be exaggerated.
Google parent Alphabet (GOOG -0.54%) (GOOGL -0.49%) is a large buyer of Nvidia's AI data center chips, and on February 4, CEO Sundar Pichai made some comments that should help Nvidia investors feel much better.

Image source: Nvidia.
The DeepSeek saga
DeepSeek was founded in 2023 as an offshoot of a successful Chinese hedge fund called High-Flyer, which had been building AI trading algorithms for years. DeepSeek released its V3 large language model (LLM) in December 2024, followed by its R1 reasoning model in January, and their competitiveness with some of the latest models from OpenAI and other startups set the tech sector abuzz.
Because DeepSeek's work is open source, the industry quickly learned some important details. The startup claims it trained V3 for just $5.6 million, although that figure does not include the estimated $500 million in chips and infrastructure, according to SemiAnalysis, that the company needed to reach its current stage of development.
DeepSeek also used older-generation Nvidia GPUs like the H100, because the U.S. government has banned chip makers from selling the most advanced hardware to Chinese companies (in an effort to protect American AI leadership).
It turns out that DeepSeek implemented several unique innovations on the software side to make up for its shortfall in computing power. The startup developed highly efficient algorithms and data input methods, and it also used a technique called distillation, which involves training smaller models using knowledge from larger, already successful AI models.
In fact, OpenAI accuses DeepSeek of using its GPT-4o models to train R1, by prompting the ChatGPT chatbot and having the smaller model "learn" from its outputs. Distillation speeds up the training process considerably, because developers don't need to collect or process mountains of data. The result requires far less computing power, which means fewer GPUs.
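For readers curious about the mechanics, the core idea behind distillation can be sketched in a few lines of Python. This is a generic, hypothetical illustration (not DeepSeek's or OpenAI's actual code): a small "student" model is trained to match the softened probability distribution produced by a larger "teacher" model, rather than learning from raw data alone.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores into probabilities. A higher temperature
    'softens' the distribution, exposing more of the teacher's knowledge
    about near-miss answers."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student outputs.
    Minimizing this trains the student to mimic the teacher."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits for one prediction (illustrative numbers only)
teacher = [4.0, 1.5, 0.5]   # large, confident model
student = [2.0, 1.8, 1.0]   # smaller model still learning

print(distillation_loss(teacher, student))  # shrinks toward 0 as the student improves
```

The key point for investors: the student never sees the teacher's training data, only its outputs, which is why distillation cuts the data-processing and compute bill so dramatically.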
Naturally, investors are worried that if other AI developers adopt this approach, demand for Nvidia's chips will collapse.
Nvidia is preparing for a record year of GPU sales
On February 26, Nvidia will report its results for fiscal 2025, which ended January 31. The company expects total revenue of $128.6 billion. Recent quarterly results suggest that roughly 88% of that revenue is attributable to the data center segment, thanks to a surge in GPU sales.
According to Wall Street's consensus forecast (provided by Yahoo!), Nvidia could set another record in fiscal 2026, with a new high for total revenue potentially on the cards. It's easy to understand why investors are nervous about the DeepSeek news, since hitting that estimate depends on continued demand for GPUs from AI developers.
The H100 remains a hot product, but Nvidia's latest GB200 GPU (based on the Blackwell architecture) can perform AI inference up to 30 times faster. Inference is the process by which an AI model absorbs live data (like chatbot prompts) and generates outputs for users. It typically comes after the initial training phase (more on this in a moment).
The GB200 is now the gold standard for AI data centers, and when it started shipping to customers at the end of 2024, demand significantly outstripped supply.

Image source: Alphabet.
Sundar Pichai's counterargument
Pichai held a conference call with Wall Street analysts on February 4 to discuss Alphabet's fourth-quarter 2024 results. In response to one question, he said there has been a significant shift in how computing power is allocated over the past three years, moving toward inference and away from training.
Pichai said new reasoning models (like DeepSeek's R1 and Alphabet's own Flash Thinking models) will only accelerate that shift. These models spend time "thinking" before producing a response, which requires significantly more inference computing power than their predecessors used. The technical term for this is test-time scaling, a way for AI models to produce more accurate responses without additional training-time scaling (which involves feeding the model ever-larger amounts of new data).
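One simple flavor of test-time scaling can be illustrated with a short Python sketch. This is a hypothetical toy, not Alphabet's or DeepSeek's method: a deliberately unreliable "model" is sampled many times, and a majority vote over the samples is taken. Accuracy rises as more inference compute is spent, with no retraining at all.

```python
import random
from collections import Counter

def noisy_model(question, rng):
    """Stand-in for an AI model that answers a sum correctly only 60%
    of the time. (Purely illustrative; not a real model.)"""
    correct = sum(question)
    return correct if rng.random() < 0.6 else correct + rng.choice([-1, 1])

def answer_with_test_time_scaling(question, samples, rng):
    """Spend extra inference compute: sample the model repeatedly and
    return the majority-vote answer."""
    votes = Counter(noisy_model(question, rng) for _ in range(samples))
    return votes.most_common(1)[0][0]

rng = random.Random(42)
question = [2, 3, 5]  # the 'task' is adding these numbers (answer: 10)
one_shot = sum(noisy_model(question, rng) == 10 for _ in range(1000)) / 1000
voted = sum(answer_with_test_time_scaling(question, 15, rng) == 10
            for _ in range(1000)) / 1000
print(f"single-sample accuracy: {one_shot:.2f}, 15-vote accuracy: {voted:.2f}")
```

The takeaway mirrors Pichai's point: every extra sample is an extra inference pass on a GPU, so better answers at test time translate directly into more chip demand.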
Meta Platforms CEO Mark Zuckerberg shares that view. He recently said that lower training workloads don't necessarily mean developers will need fewer chips, because capacity is simply shifting toward inference.
Finally, Alphabet told Wall Street it plans to allocate $75 billion to capital expenditures (capex) in 2025, most of which will go toward data center infrastructure and chips. That figure represents a significant increase from its $52 billion in capex in 2024, so the company certainly isn't pulling back.
Overall, the demand picture for Nvidia's GPUs still appears to be intact. Considering the stock is currently trading at an attractive valuation, the recent dip may even be a buying opportunity.
Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Anthony Di Pizio has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Meta Platforms, and Nvidia. The Motley Fool has a disclosure policy.