(Reuters) - Anthropic is expanding its deal with Google to use as many as one million of the tech giant's artificial intelligence chips, worth tens of billions of dollars, as the startup races to advance its AI systems in a competitive market.
Under the deal announced on Thursday, Anthropic will have access to more than one gigawatt of computing capacity, coming online in 2026, to train the next generations of its Claude AI model on Google’s in-house tensor processing units, or TPUs, which were traditionally reserved for internal use.
Anthropic said it chose the TPUs due to their price-performance ratio and efficiency, as well as its existing experience in training and serving its Claude models with the processors.
The deal is the latest sign of insatiable chip demand in the AI industry, where companies are rushing to develop technology that can match or surpass human intelligence.
Alphabet-owned Google, whose TPUs are available for rent on Google Cloud and serve as an alternative to supply-constrained Nvidia chips, will also provide additional cloud computing services to Anthropic.
Rival OpenAI recently signed multiple deals that may cost over $1 trillion to secure about 26 gigawatts of computing capacity, enough to power roughly 20 million U.S. homes. One gigawatt of compute can cost roughly $50 billion, industry executives have said.
ChatGPT-maker OpenAI is actively using Nvidia's graphics processing units and AMD's AI chips to meet its growing demand for computing power.
Reuters exclusively reported earlier in October that Anthropic projects its annualized revenue run rate will more than double, and potentially nearly triple, next year, fueled by the rapid adoption of its enterprise products.
The startup emphasizes AI safety and building models for enterprise use cases. Its models have helped power a boom in vibe coding startups such as Cursor.
(Reporting by Juby Babu in Mexico City and Zaheer Kachwala in Bengaluru; Editing by Shilpi Majumdar, Anil D’Silva and Alan Barona)