OpenAI is completing its first in-house chip design in the coming months and is expected to send it to Taiwan Semiconductor Manufacturing Co (TSMC) for production, Reuters reported. The ChatGPT maker's effort to create a first-generation AI chip is aimed at reducing its reliance on Nvidia.
The process of sending an initial design to a chip fabrication plant is called a "tape-out." The tape-out process can take around six months and cost tens of millions of dollars, and there is no guarantee that the chip will function as expected on the first attempt. A failure would require OpenAI to diagnose the problem and repeat the entire process.
What will OpenAI do with its first-generation AI chip?
If the initial tape-out goes smoothly, OpenAI could reportedly begin mass-producing its AI chip in 2026. The chip is based on TSMC's 3-nanometer process technology and uses a widely adopted systolic array architecture with high-bandwidth memory, which Nvidia also uses in its chips, according to the report.
AI chips can both train and run AI models, but OpenAI's chip will initially be used only to run AI models, and on a limited scale. At first, the chip would play a limited role within OpenAI's infrastructure.
Why is OpenAI building its own AI chip?
Generative AI chatbots such as ChatGPT, Gemini, and Meta AI require large numbers of chips to train their foundation models. The powerful chips required for these workloads are primarily supplied by Nvidia, which holds around 80% of the market.
OpenAI's effort to build its own chips could strengthen its negotiating power with chip suppliers, including Nvidia. If the first chip succeeds, OpenAI reportedly plans to develop increasingly sophisticated processors, with broader capabilities in each generation.
Is OpenAI ahead of other big tech companies?
OpenAI's plan to send its first chip design to TSMC later this year puts it ahead of the curve for an effort of this kind, which typically takes years. Other large tech companies, such as Satya Nadella-led Microsoft and Mark Zuckerberg-led Meta, have struggled to produce satisfactory chips despite years of work.
Meanwhile, the recent rise of DeepSeek, built with only a fraction of the cost and computing power of ChatGPT and other Western AI chatbots, has raised questions about whether fewer chips may be needed in the future to develop powerful large language models.