Qualcomm CEO Cristiano Amon
Qualcomm
The DeepSeek moment has crashed most semiconductor stocks as investors fear that demand for data-center AI chips will decline, but these new, smaller AI models are just the ticket for on-device AI. "DeepSeek R1 and other similar models have recently demonstrated that AI models are developing faster, becoming smaller, more capable, and more efficient, and can now be run directly on devices," as Qualcomm puts it. And less than a week after its release, the distilled DeepSeek R1 models were running on PCs and smartphones powered by Qualcomm Snapdragon. (Qualcomm is a Cambrian-AI Research client.)
Both Apple and Qualcomm stand to benefit from these new models, but Qualcomm can quickly apply them well beyond smartphones. The company has a strong position in other markets, including automotive, robotics, and VR headsets, as well as its emerging PC business. All of these markets benefit from the new small models and the applications built on them.
Apple is renowned for its beautiful, fully integrated designs, but Qualcomm partners with others who design and build the final products, allowing for much wider adoption. Qualcomm Snapdragon chips, for example, have won over 70% market share, powering both the Meta Quest headsets and the Ray-Ban Meta smart glasses.
Key trends accelerating on-device AI
Both Qualcomm and Apple are working hard to reduce model sizes through low-precision math and model-optimization techniques such as pruning and sparsity. Distillation, in particular, is driving improvements in the quality, performance, and efficiency of AI models that can run on devices. And these small models do not force users to compromise.
These new, cutting-edge small AI models deliver excellent performance thanks to techniques such as model distillation and new AI network architectures, which simplify the development process without sacrificing quality. These small models now perform as well as or better than larger models that can realistically run only in the cloud.
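For readers curious about what distillation looks like under the hood, here is a minimal, hypothetical PyTorch sketch (not Qualcomm's or DeepSeek's actual code): a small "student" network is trained to match the softened output distribution of a larger "teacher." The tiny layer sizes and random data below are placeholders for illustration only; distilling an LLM such as R1 typically works instead by fine-tuning a smaller model on outputs generated by the larger one, but the goal of transferring a big model's capability into a small one is the same.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder teacher/student networks; real distillation uses full LLMs.
teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature softens the teacher's distribution

for _ in range(100):  # toy training loop on random data
    x = torch.randn(32, 128)
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between the softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The student here has a fraction of the teacher's parameters, which is exactly the trade that makes on-device deployment practical.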
Furthermore, model sizes continue to shrink rapidly. State-of-the-art quantization and pruning techniques allow developers to reduce model size without material loss of accuracy.
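As a rough, hedged illustration of how these techniques shrink a model, the sketch below applies PyTorch's built-in pruning and post-training dynamic quantization utilities to a toy network; it is not a production Snapdragon workflow, and the layer sizes are placeholders. Note that in this dense toy example the storage savings come from int8 weights, while the zeroed (pruned) weights only pay off at runtime with sparsity-aware kernels or compressed storage formats.

import io
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def size_mb(model: nn.Module) -> float:
    # Serialized size of the model's state dict, in megabytes.
    buffer = io.BytesIO()
    torch.save(model.state_dict(), buffer)
    return buffer.getbuffer().nbytes / 1e6

# Toy float32 network standing in for a much larger on-device model.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024))
print(f"fp32 baseline:  {size_mb(model):.2f} MB")

# Magnitude pruning: zero out the smallest 50% of weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

# Post-training dynamic quantization: weights stored as int8, activations
# quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(f"int8 quantized: {size_mb(quantized):.2f} MB")  # roughly 4x smaller

Production toolchains go further, with 4-bit weights and hardware-aware optimization, but the basic arithmetic is the same: fewer bits per weight and fewer effective weights mean models that fit in a phone's memory budget.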
Additionally, Qualcomm believes AI is becoming the new user interface, driven by the trend toward AI agents. Personalized multimodal agents simplify interactions and complete sophisticated tasks across a wide variety of applications.
The table below shows that the distilled DeepSeek versions of the Qwen and Meta Llama models perform as well as or better than the larger, more expensive frontier models from OpenAI and Mistral. The GPQA Diamond benchmark is particularly interesting: it requires deep multi-step reasoning to solve complex queries, something many models find challenging.
The new DeepSeek-R1 shows significantly better results (accuracy) across math and coding …
Qualcomm
So, do you really need AI on the device?
Market skepticism about on-device AI is fading fast. Here is one use case Qualcomm offers: imagine you're driving with passengers and one of them mentions coffee. The in-car LLM agent hears this and suggests where you could stop to grab a cup. Because the in-car LLM and the ADAS systems it works with run locally, a cloud-based AI cannot perform this task. This is just one example of how agents are changing the way we use AI, and one that is particularly well suited to running on-device; a simplified sketch of the flow follows the image below.
This is a great use case for LLM agents in cars. Coffee, anyone?
Qualcomm
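To make that flow concrete, here is a deliberately simplified, hypothetical sketch of how such an in-car agent could be wired together. Every function name below (transcribe_cabin_audio, local_llm, nearby_coffee_stops) is a placeholder for whatever on-device speech, language, and navigation components an automaker actually ships; none of this is a real Qualcomm API. The point is that the whole loop runs locally, so cabin audio never leaves the car and suggestions arrive with low latency.

# Hypothetical on-device agent loop. All three components are placeholders:
# in a real car they would be an on-device speech-to-text model, a small
# distilled LLM running on the SoC, and the local navigation/ADAS stack.

def transcribe_cabin_audio(audio_frame: bytes) -> str:
    # Placeholder for on-device speech recognition.
    return "I could really use a coffee right now"

def local_llm(prompt: str) -> str:
    # Placeholder for a small distilled LLM; here the "model" just checks
    # whether the passenger's quoted words mention coffee, so the sketch
    # runs end to end without any real inference.
    quoted = prompt.split("'")[1] if "'" in prompt else prompt
    return "COFFEE" if "coffee" in quoted.lower() else "NONE"

def nearby_coffee_stops(route: list[str]) -> list[str]:
    # Placeholder query against locally stored route data.
    return [stop for stop in route if "cafe" in stop.lower()]

def agent_step(audio_frame: bytes, route: list[str]) -> str | None:
    utterance = transcribe_cabin_audio(audio_frame)
    intent = local_llm(
        f"Passenger said: '{utterance}'. "
        "Answer with one word: COFFEE if they want a coffee stop, otherwise NONE."
    )
    if intent.strip().upper() == "COFFEE":
        stops = nearby_coffee_stops(route)
        if stops:
            return f"There's {stops[0]} just ahead. Want to stop?"
    return None

if __name__ == "__main__":
    print(agent_step(b"", ["Main Street Cafe", "Highway 101 Rest Area"]))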
So, isn’t the AI world crashing?
Not at all. In fact, these new models can be seen as a turning point for ubiquitous AI. Smaller, more efficient, and more accurate AI models are the key to making AI broadly useful and affordable. As a result, the techniques demonstrated by DeepSeek are already being applied by mainstream AI companies to remain competitive and to avoid the censorship and security pitfalls that DeepSeek presents.
And Qualcomm is probably the biggest winner in this evolution toward affordable AI that runs on the billions of devices already in use.
Disclosure: This article expresses the author's opinions and should not be taken as advice to purchase from or invest in the companies mentioned. My firm, Cambrian-AI Research, is fortunate to have many semiconductor companies as clients, including BrainChip, Cadence, Cerebras Systems, D-Matrix, Esperanto, Groq, IBM, Intel, Micron, NVIDIA, Qualcomm, Graphcore, SiMa.ai, Synopsys, Tenstorrent, Ventana Microsystems, and numerous investors. I have no investment positions in any of the companies mentioned in this article. For more information, please visit our website at https://cambrian-ai.com.