OpenAI Builds First Chip With Broadcom and TSMC, Scales Back Foundry Ambition

OpenAI is partnering with Broadcom and TSMC to design its first in-house AI chip while supplementing its infrastructure with AMD chips, aiming to diversify its chip supply and reduce its reliance on Nvidia GPUs. “The company has dropped the ambitious foundry plans for now due to the costs and time needed to build a network, and plans instead to focus on in-house chip design efforts,” reports Reuters. From the report: OpenAI has been working for months with Broadcom to build its first AI chip, one focused on inference, according to sources. Demand right now is greater for training chips, but analysts have predicted that the need for inference chips could surpass it as more AI applications are deployed. Broadcom helps companies, including Alphabet unit Google, fine-tune chip designs for manufacturing and also supplies parts of the design that help move information on and off the chips quickly, which is important in AI systems where tens of thousands of chips are strung together to work in tandem. OpenAI is still determining whether to develop or acquire other elements for its chip design, and may engage additional partners, said two of the sources.

The company has assembled a chip team of about 20 people, led by top engineers who previously built Tensor Processing Units (TPUs) at Google, including Thomas Norrie and Richard Ho. Sources said that, through Broadcom, OpenAI has secured manufacturing capacity with Taiwan Semiconductor Manufacturing Company (TSMC) to make its first custom-designed chip in 2026, though they cautioned the timeline could change. Nvidia’s GPUs currently hold over 80% of the market, but shortages and rising costs have led major customers such as Microsoft, Meta, and now OpenAI to explore in-house or external alternatives.

Read more of this story at Slashdot.
