ACIAPR AI News

Artificial intelligence news curated with context, verified through reliable sources, and more...

Hardware

Anthropic looks beyond NVIDIA and explores building its own chips to sustain the next wave of AI

Anthropic has just sent a signal that goes far beyond Claude or the chatbot race. According to Reuters, the company is exploring the possibility of designing its own AI chips. The effort is still in its early stages, but it makes one thing very clear: the next big AI war is no longer being fought only over models, but over the ability to secure enough hardware to train and serve them at global scale. The report lands only days after Anthropic announced a massive expansion of its partnership with Google and Broadcom to secure multiple gigawatts of next-generation TPU capacity. Together, the two developments paint a sharp picture of the moment the industry is in: staying at the frontier is no longer just a matter of writing better software; it increasingly requires controlling the physical muscle underneath it.

Reuters reported that Anthropic has not made a final decision. The discussions are still at an early stage, the company has not settled on a specific design, and it has not even assembled a fully dedicated team for the effort. Even exploring that path is newsworthy, though, because it reveals where the market is moving. Designing an advanced AI chip is not a cheap vanity project: according to Reuters' sources, it can cost roughly $500 million once talent, validation, and manufacturing are factored in. If Anthropic is weighing that route, it is because relying solely on other companies' chips is starting to look strategically limiting.

The logic behind that interest is not hard to understand. Demand for Claude has surged in 2026. Anthropic itself said this week that its annualized revenue run rate has now surpassed $30 billion, up from roughly $9 billion at the end of 2025, and that more than 1,000 enterprise customers are already spending over $1 million per year with the company. Growth at that speed turns compute into an existential issue. With more customers, more inference demand, and more training ambition, it is no longer enough to obtain chips whenever they happen to be available; the company needs visibility years ahead, more controllable costs, and less exposure to outside bottlenecks.

That is where the context of Anthropic’s freshly announced agreement with Google and Broadcom matters. In its official statement, the company said it secured multiple gigawatts of next-generation TPU capacity expected to come online starting in 2027. It also reiterated that Claude already runs across a mix of hardware: AWS Trainium, Google TPUs, and NVIDIA GPUs. That diversity is not accidental. It is a way to spread risk, optimize workloads, and negotiate better with suppliers. But it also reveals an interesting tension: while Anthropic uses third-party infrastructure today, it may want a more proprietary piece of that stack tomorrow.

In other words, the possible move toward in-house chip design does not contradict Anthropic's deal with Google and Broadcom; it complements it. A company can need massive external capacity today while exploring how to gain more autonomy tomorrow. That pattern is increasingly visible across major tech players: Reuters noted that Meta and OpenAI are also moving toward more customized AI silicon. The reason is straightforward. NVIDIA remains the dominant force in the market, but depending too heavily on any single supplier in an industry plagued by recurring scarcity is a major risk. Whoever integrates model development, cloud, energy, and silicon most effectively will gain an advantage that is much harder to copy.

There is also a broader industrial reading here. For a long time, the public conversation around AI revolved around benchmarks, flashy demos, and friendlier assistants. But this week’s developments push attention toward something less glamorous and far more decisive: infrastructure. Anthropic does not just need a better model; it needs billions of dollars in capacity, deep relationships with manufacturers, and the option to decide which architecture best suits its future. Frontier AI is starting to look less like a pure software race and more like a contest shaped by semiconductors, cloud platforms, energy, and industrial strategy.

For the broader public, the key takeaway is that the next AI leader will probably not be defined only by who offers the smartest responses or the slickest interface. It will also depend on who can guarantee enough compute to serve millions of requests, train more complex systems, and protect margins in an extremely expensive business. If Anthropic ultimately does build its own chips, it will not just be a technical move; it will be a declaration of strategic independence. And even if the idea is still early, the fact that it is already on the table shows just how much the AI battle is shifting from prompts all the way down to silicon.

Source: Reuters, Anthropic, CNBC