OpenAI is pushing ahead with plans to reduce its dependence on Nvidia for chip supply by developing its first generation of in-house artificial intelligence silicon.
The ChatGPT maker will finalize the design of its first in-house chip in the coming months and send it to Taiwan Semiconductor Manufacturing Co (TSMC) for fabrication, sources told Reuters. The process of sending an initial design through a chip factory is called “taping out.”
OpenAI and TSMC declined to comment.
The update shows that OpenAI is on track to meet its ambitious goal of mass production at TSMC in 2026. A typical tape-out costs tens of millions of dollars and takes roughly six months to produce a finished chip, unless OpenAI pays significantly more for expedited manufacturing. There is no guarantee the silicon will work on the first tape-out; a failure would require the company to diagnose the problem and repeat the tape-out step.
Within OpenAI, the training-focused chip is viewed as a strategic tool to strengthen the company’s negotiating leverage with other chip suppliers, sources said. After the first chip, OpenAI’s engineers plan to develop increasingly advanced processors with broader capabilities with each new iteration.
If the first tape-out goes smoothly, the ChatGPT maker would be able to mass-produce its first in-house AI chip and could begin testing an alternative to Nvidia’s chips later this year. OpenAI’s plan to send its design to TSMC this year shows the startup has made rapid progress on its first design. Big companies such as Microsoft and Meta have struggled to produce satisfactory chips despite years of effort.
The recent market rout triggered by Chinese AI startup DeepSeek has also raised questions about whether fewer chips will be needed to develop powerful models in the future. The chip is being designed by an in-house team at OpenAI led by Richard Ho, which has doubled in size to 40 people in recent months, in collaboration with Broadcom. Ho joined OpenAI more than a year ago from Alphabet’s Google, where he helped lead the search giant’s custom AI chip program. Reuters first reported OpenAI’s plans with Broadcom last year.
Ho’s team is smaller than the large-scale efforts at tech giants such as Google and Amazon. A new chip design for an ambitious, large-scale program could cost $500 million for a single version of the chip, according to industry sources with knowledge of chip design budgets. Those costs can double once the necessary software and surrounding infrastructure are built.
Makers of generative AI models such as OpenAI, Google and Meta have demonstrated that ever-larger numbers of chips working together in their data centers make their models smarter, fueling insatiable demand for chips. Meta has said it will spend $60 billion on AI infrastructure in the next year, and Microsoft has said it will spend $80 billion in 2025. Nvidia’s chips are currently the most popular, holding a market share of around 80%. OpenAI itself is participating in the $500 billion Stargate infrastructure program announced last month by US President Donald Trump.
However, rising costs and dependence on a single supplier have led major customers such as Microsoft, Meta and OpenAI to explore in-house or external alternatives to Nvidia’s chips.
OpenAI’s in-house AI chip will be capable of both training and running AI models, but will initially be deployed on a limited scale, primarily for running AI models, sources said. The chip will play a limited role within the company’s infrastructure.
To build out an effort as comprehensive as Google’s or Amazon’s AI chip programs, OpenAI would need to hire hundreds of engineers.
TSMC will manufacture the OpenAI chip using its advanced 3-nanometer process technology. The chip features a commonly used systolic array architecture with high-bandwidth memory (HBM), which Nvidia also uses for its chips, and extensive networking capabilities, sources said.
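For context on the architecture mentioned above: a systolic array is a grid of simple processing elements (PEs), each performing one multiply-accumulate per cycle as operands flow through the grid, which is why the design suits AI workloads dominated by matrix multiplication. The sketch below is a minimal, purely illustrative Python simulation of that dataflow (an output-stationary variant, with the input skew of real hardware collapsed into a time index); it is not based on any detail of OpenAI's actual chip, and the function name is our own.

```python
# Illustrative simulation of an output-stationary systolic array computing
# C = A @ B. Each PE at grid position (i, j) holds one output value; at time
# step t, it receives A[i][t] from the left and B[t][j] from above and does
# one multiply-accumulate (MAC). Real hardware skews input arrival so data
# flows diagonally through the grid; here we simply index time directly.

def systolic_matmul(A, B):
    n, k = len(A), len(A[0])
    m = len(B[0])
    C = [[0] * m for _ in range(n)]     # one accumulator per PE
    for t in range(k):                  # one wave of operands per cycle
        for i in range(n):
            for j in range(m):
                C[i][j] += A[i][t] * B[t][j]   # MAC at PE (i, j)
    return C

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```

Because every PE does the same work in lockstep and data is reused as it streams across the grid, the design achieves high arithmetic throughput per unit of memory bandwidth, which is the property HBM-backed AI accelerators exploit.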