Big Tech companies are starting to look like IBM in the 1960s

The race to dominate the emerging artificial intelligence market is pushing tech giants to adopt business models reminiscent of IBM (IBM) in the 1960s.

Big tech “hyperscalers” Alphabet (GOOG, GOOGL), Meta (META), Microsoft (MSFT) and Amazon (AMZN) are all in various stages of developing their own custom AI chips to put into their data centers and power their cloud and software products. Alphabet, the furthest along of the four companies, is even reportedly in talks to sell its physical chips, called TPUs, to Meta, a move that would put it head-to-head with leading chipmaker Nvidia (NVDA).

These efforts have prompted analysts at Bloomberg Intelligence to predict that the custom AI chip market will grow to $122 billion by 2033.

Big tech companies are making their own components beyond chips: Microsoft and Amazon are actively investing in dark fiber, underground fiber-optic cables that are currently unused, RBC Capital Markets analyst Jonathan Atkin said in a recent note to clients. Google and Meta also own cables of their own but still purchase capacity from third parties, he wrote. These cables are necessary to connect corporate data centers to the businesses that use them.

The dynamic of cloud providers manufacturing their own components (hardware) to run their core products (software) signals Silicon Valley’s return to vertical integration—an operating model pioneered by oil and steel barons in the late 19th century and adopted by IBM during the digital revolution.


IBM was one of the most successful vertically integrated companies of the 1960s, when it manufactured the hardware components for its mainframe computers. The strategy stemmed from the idea that making its own specialized parts would improve both its end product (the mainframe) and its profit margins, and from concerns about supply shortages of parts for early computers. It worked: in 1985, the company accounted for more than half of the computer industry’s market value, Carliss Y. Baldwin noted in her book “Design Rules.”

Of course, it all fell apart later. Falling semiconductor production costs in the 1990s and the rise of software giant Microsoft and chip leader Intel weakened IBM’s once-strong competitive moat, and by 2000 the company no longer claimed to be vertically integrated, Baldwin said.

Just as the advent of computers propelled IBM toward vertical integration, the proliferation of artificial intelligence since the launch of ChatGPT in late 2022 has put today’s cloud giants on a similar trajectory. In particular, the high cost and limited availability of Nvidia chips have prompted the tech giants to accelerate development of their own AI chips, which are cheaper and better optimized for each company’s software.

Nvidia founder and CEO Jensen Huang speaks with a Rubin GPU and Vera CPU at an Nvidia press conference before the CES technology show in Las Vegas on Monday, January 5, 2026. (AP Photo/John Locher)

“Hyperscalers…recognize that having a single vendor for AI computing poses serious strategic dangers,” said Seaport analyst Jay Goldberg. “So they now have a very strong strategic reason to produce their own chips.”

Meta reportedly began testing in-house AI chips for training models last year and recently acquired chip startup Rivos to accelerate its custom semiconductor work. Google’s TPUs have become so advanced that Anthropic (ANTH.PVT), OpenAI (OPAI.PVT) and even rival Meta have signed major cloud deals with the company to access them. After long delays, Microsoft launched its next-generation Maia 200 chip in January.


During Yahoo Finance’s recent visit to Amazon’s chip lab and nearby data center in Austin, Texas, the company showed off its latest UltraServers, server clusters built around Amazon’s latest generation of in-house AI accelerators (called Trainium), its Graviton CPUs, and custom network cables and switches to connect them. Amazon still sells more AI compute powered by Nvidia GPUs than by its custom accelerators, but the tech giant is increasingly emphasizing the advantages of its in-house hardware.

Amazon Web Services technical director Paul Roberts told Yahoo Finance that its Trainium3 chips can give cloud customers up to a 60% price-performance advantage over GPUs on inference workloads.

“I think what we’re seeing in the market is a lot of validation of this approach [of making custom chips] — versus using some kind of general-purpose GPU — now you can have these specialized processors and accelerators that allow for incredible energy efficiency savings,” he said.

Such energy savings will become even more important as the AI data center boom begins to feel the impact of power constraints.

But Seaport’s Goldberg believes the trend toward vertical integration has reached its “extreme limits” and not all big tech companies will succeed.

“If you want to design a leading chip, that’s a huge expense,” he said, adding, “There are only so many companies that can afford it.”




