Nvidia’s (NVDA) licensing deal with chip startup Groq (GROQ.PVT) shows how the tech giant is using its massive cash hoard to maintain its lead in the artificial intelligence market.
Nvidia said this week it reached a non-exclusive deal with Groq to license its technology and hired the startup's founder and CEO, Jonathan Ross, along with its president and other employees. CNBC reports that the agreement is worth $20 billion, making it the largest deal in Nvidia's history. (The company declined a request to comment on the figure.)
Bernstein analyst Stacy Rasgon said in a note to clients on Thursday that Nvidia's Groq deal "appears to make strategic sense for NVDA as they leverage their increasingly strong balance sheet to maintain dominance in key areas." Nvidia's cash inflows rose more than 30% year over year in the most recent quarter, to $22 billion.
“The deal…is essentially an acquisition of Groq but is not labeled an acquisition (to avoid regulatory scrutiny),” Hedgeye Risk Management analysts added in a note on Friday.
The move is just the latest in a series of artificial intelligence deals for Nvidia, the world's first company worth $5 trillion. The chipmaker's investments in AI companies span the market, from large language model developers like OpenAI (OPAI.PVT) and xAI (XAAI.PVT) to "neoclouds" like Lambda (LAMD.PVT) and CoreWeave (CRWV) that specialize in AI services and compete with its big tech customers.
Nvidia has also invested in chipmakers Intel (INTC) and Enfabrica. The company agreed to acquire British chip architecture designer Arm (ARM) in 2020, but abandoned the deal in 2022 amid regulatory opposition.
Nvidia’s wide-ranging investments – many of them in its own customers – have led to accusations that it engaged in circular financing schemes reminiscent of the dot-com bubble. The company has strongly denied these claims.
Groq, meanwhile, is looking to become one of Nvidia’s competitors.
Founded in 2016, Groq produces LPUs (language processing units) for artificial intelligence inference, which it markets as an alternative to Nvidia's GPUs (graphics processing units).
Training an AI model involves teaching the model to learn patterns from large amounts of data, while "inference" refers to using a trained model to generate output. Both processes demand the heavy computing that AI chips provide.
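To make the distinction concrete, here is a minimal sketch (not specific to Groq or Nvidia hardware) using a toy one-parameter model: training adjusts the model's weight against known examples, while inference simply applies the finished weight to new input.

```python
# Toy illustration of the two phases of an AI model's lifecycle,
# using a one-parameter linear model y = w * x.

def train(data, epochs=200, lr=0.01):
    """Training: repeatedly adjust the weight w to fit known (x, y) pairs."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            w -= lr * 2 * (pred - y) * x  # gradient step on squared error
    return w

def infer(w, x):
    """Inference: use the already-trained weight to produce output."""
    return w * x

# Learn y = 3x from a few examples, then run inference on unseen input.
w = train([(1, 3), (2, 6), (3, 9)])
print(round(infer(w, 4), 2))
```

Real training runs do this over billions of parameters and examples, which is why training is the more compute-intensive phase, while inference happens every time a deployed model answers a query.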
Although Nvidia easily dominates the market for AI training chips, some analysts believe it will soon see more competition in the inference space. That's because custom chips like Google's (GOOG) TPU (tensor processing unit), and arguably Groq's LPU, may be better suited for certain tasks. The LPU, for example, is faster and more power-efficient for some models because it keeps data in on-chip memory called SRAM. Nvidia's GPUs, by contrast, rely on off-chip high-bandwidth memory (HBM) made by companies like Micron (MU) and Samsung (005930.KS).