Google launched Gemini 3 on its own TPU chips, sending Alphabet stock up 12% while Nvidia dipped.

Table of Contents
Summary
  • TPUs offer a fast, specialized alternative to Nvidia GPUs for deep learning workloads built on matrix math.
  • Nvidia retains a dominant market lead due to its entrenched CUDA software platform despite growing hardware competition.

Google launched Gemini 3 on its own TPU chips
Photo by BoliviaInteligente on Unsplash

The balance of power in the artificial intelligence hardware market is shifting for the first time in years. Google has planted itself at the center of the conversation with the release of its Gemini 3 model, which was trained entirely on Google’s own custom silicon, known as Tensor Processing Units. The success of the launch has sent Alphabet shares soaring 12% since mid-November, pushing the company’s market value to $3.86 trillion and solidifying its spot as the third-largest company in the world.

This surge comes at the expense of the reigning king of AI hardware. Nvidia saw its stock dip 3.4% during the same period as investors digested the news. The situation intensified following reports that Meta Platforms is in talks to purchase these Google chips for its own data centers. This domain has long been the exclusive territory of Nvidia and its graphics processing units. The market reaction suggests that Wall Street is finally seeing a viable alternative to the green team’s monopoly.

Google realized back in 2015 that standard hardware would not suffice for its scale. They needed a chip designed specifically for deep learning rather than graphics rendering. The result was the TPU. We are now seeing the seventh generation of these chips in action in 2025. They power everything from Google Maps to the new Gemini 3 model. Broadcom helps design these chips and has seen its own stock jump 16% as a direct beneficiary of this growing ecosystem.

The technical difference between the two chips is vital for investors to understand. Nvidia GPUs are versatile processors that excel at splitting complex tasks into many small pieces and running them side by side, which makes them well suited to gaming and general AI research. Google TPUs are different: they are specialized expressly for the matrix math at the heart of deep learning. Think of a GPU as an off-road vehicle that can go anywhere, while the TPU is a bullet train. It only goes from point A to point B, but it gets there incredibly fast.
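The "matrix math" in question is mostly the dense matrix multiply at the core of every deep learning layer. A minimal plain-Python sketch (illustrative only, with toy values of our own invention) shows the operation that TPU hardware is built to accelerate on huge tiles in parallel:

```python
# Minimal sketch: the core operation TPUs are specialized for is the
# dense matrix multiply behind deep learning layers. This is a plain
# reference implementation; real hardware runs it massively in parallel.

def matmul(a, b):
    """Multiply an m x k matrix by a k x n matrix (lists of lists)."""
    m, k = len(a), len(a[0])
    k2, n = len(b), len(b[0])
    assert k == k2, "inner dimensions must match"
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

# A toy "dense layer": activations (batch x features) times weights.
activations = [[1.0, 2.0],
               [3.0, 4.0]]
weights = [[0.5, 0.0],
           [0.0, 0.5]]
print(matmul(activations, weights))  # [[0.5, 1.0], [1.5, 2.0]]
```

A model like Gemini 3 performs this same operation trillions of times during training, which is why a chip narrowly optimized for it can outrun a general-purpose processor.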

Major tech players are now eager to diversify their supply chains. Apple used Google TPUs to train its Apple Intelligence models, and the high-flying startup Anthropic uses them as part of a broad multi-cloud strategy. Anthropic, valued at $350 billion, refuses to rely on a single hardware vendor, running Amazon Trainium and Google TPUs alongside Nvidia chips to lower its risk. This diversification is the biggest long-term threat to Nvidia.

Nvidia still holds a massive advantage that hardware alone cannot break: software. The company introduced its CUDA platform in 2006, letting developers program GPUs using standard languages like C. Almost every AI researcher on the planet knows how to use CUDA today, while Google’s software stack is far less mature and much harder for the average developer to adopt. This software moat protects Nvidia’s 80% market share and its staggering 73% gross margins.
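CUDA’s appeal is that GPU code reads like ordinary C: a developer writes the body of a function for a single thread, and the hardware runs that body across thousands of threads at once. A rough Python sketch of that single-program-multiple-data idea (all names here are illustrative stand-ins, not real CUDA APIs) conveys why the model is easy to pick up:

```python
# Rough illustration of CUDA's programming model: write the logic once
# for a single index, then "launch" it across the whole array. On a GPU
# these per-index calls run in parallel; here we simply loop.

def kernel(i, a, b, out):
    # The body a developer writes for one thread, much like the inside
    # of a CUDA __global__ function operating at index i.
    out[i] = a[i] + b[i]

def launch(kernel, n, *args):
    # Stand-in for a CUDA kernel launch; the GPU would schedule these
    # n invocations concurrently across its cores.
    for i in range(n):
        kernel(i, *args)

a = [1.0, 2.0, 3.0]
b = [10.0, 20.0, 30.0]
out = [0.0] * 3
launch(kernel, 3, a, b, out)
print(out)  # [11.0, 22.0, 33.0]
```

The familiarity of this write-once-run-everywhere pattern, accumulated over nearly two decades of tooling and tutorials, is what makes Nvidia’s moat a software story rather than a hardware one.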

The real sign of trouble for Nvidia will appear in those profit margins. We will know the competition is hurting them when they are forced to lower prices to keep customers. That has not happened yet. Customers are still paying a premium for Nvidia products because alternatives like the TPU cannot yet meet every single need. The walls of the Nvidia fortress are high but Google has just shown the world that they are not unclimbable.