DeepSeek, a Chinese artificial intelligence (AI) company that develops large language models (LLMs), recently turned the AI world on its head when it claimed to have spent just $5.6 million (million, not billion) on computing power to develop its base AI model. That would be a fraction of what U.S. companies have been spending on computing power to build their AI models. And demand for infrastructure to power AI software is expected to be immense.
For example, Microsoft plans to spend $80 billion building out AI-capable data centers this year. Historically, about half of the spending on data centers goes toward servers. Meta Platforms, meanwhile, announced it would spend $65 billion this year on AI development, while the recently announced Stargate project backed by Oracle, OpenAI, and SoftBank has plans to spend $500 billion on AI infrastructure over the next several years.
The claim that DeepSeek could build an LLM so cheaply sent shock waves through the markets last week, and Nvidia (NVDA -3.35%) was the biggest loser. Nvidia’s graphics processing units (GPUs) are central to the tech world’s AI infrastructure buildout because they supply the massively parallel computing power that AI systems require. The market’s logic was simple: If DeepSeek can create an LLM chatbot on par with (or better than) ChatGPT or Meta’s Llama using far less processing power, that does not bode well for GPU demand.
In the U.S., tech companies have been using steadily more GPUs to develop each new iteration of their AI models. For example, Meta is deploying 160,000 GPUs to train Llama 4 — 10 times as many as it used to train Llama 3. Elon Musk’s xAI, meanwhile, used 20,000 GPUs to train its Grok 2 model, while for Grok 3, it used 100,000 GPUs for phase 1 of its training, then boosted it to 200,000 for phase 2.
If effective models can be built using much less computing power, that could potentially be bad news for Nvidia.
Questionable costs and potential theft
While DeepSeek’s chatbot is widely viewed as being very good, there is no independent verification of how much the company actually spent on computing power, how many GPUs it used, or which particular models they were. DeepSeek claims it used a little more than 2,000 GPUs to train its model and that it had access to 10,000 older Nvidia A100 GPUs.
Some experts do not believe it. Alexandr Wang, CEO of Scale AI, has said in interviews that his understanding is that DeepSeek had access to about 50,000 of Nvidia's more advanced H100 chips, but that the company can't say so publicly because U.S. regulations forbid their export to China. There is a belief that China is getting high-end Nvidia chips through Singapore. Nvidia's H100 chips cost roughly $25,000 each, so 50,000 of them would have cost about $1.25 billion, far higher than the asserted $5.6 million development price tag.
SemiAnalysis analyst Dylan Patel has estimated that DeepSeek and its parent company, Chinese hedge fund High-Flyer, have access to tens of thousands of Nvidia GPUs and have spent well north of $500 million on GPUs.
Meanwhile, OpenAI has recently said that it has proof that DeepSeek was trained in part by extracting data from ChatGPT’s model. Microsoft, an OpenAI investor, has found examples of data exfiltration through OpenAI developer accounts linked to DeepSeek. By extracting data from an established model, a company would be able to train a new model at a much lower cost through a process called distillation. Top White House AI advisor David Sacks has said that intellectual property theft may have indeed occurred, saying there was substantial evidence of it.
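DeepSeek's actual training pipeline is not public, but distillation itself is a standard machine-learning technique: a smaller "student" model is trained to reproduce a larger "teacher" model's output probabilities rather than learning everything from scratch. A minimal sketch of the usual soft-target loss, using made-up toy logits (every name and number here is illustrative, not taken from DeepSeek or OpenAI):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; a higher temperature softens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened distribution.
    Minimizing this pushes the student's outputs toward the teacher's."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))

# Toy example: a student that mimics the teacher scores a lower loss.
teacher = [4.0, 1.0, 0.5]        # hypothetical teacher output for one token
close_student = [3.8, 1.1, 0.4]  # student that tracks the teacher
far_student = [0.2, 3.5, 1.0]    # student that disagrees

assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

The point of the technique is exactly the cost argument in the paragraph above: the teacher's probability distributions carry far more signal per example than raw text does, so a student can reach strong performance with much less compute.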
Jevons paradox
While there is a lot of doubt surrounding DeepSeek’s cost claims, and it appears the company may have gotten a leg up by distilling data from OpenAI’s model, there is also an argument that even a genuinely cheaper AI model wouldn’t necessarily hurt Nvidia. Jevons paradox is the economic theory that when a resource can be used more efficiently, the resulting lower costs drive more consumption of the resource, so the efficiency gain doesn’t reduce overall demand. Applied to AI, the belief is that lower computing costs will increase the technology’s adoption.
Upon initially hearing of DeepSeek’s cost claims, Microsoft CEO Satya Nadella tweeted: “Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of!”
A number of Wall Street analyst firms, meanwhile, have said that this could be a good thing for Nvidia. That group includes Cantor Fitzgerald, which said that this will lead to the AI industry wanting more computing power, not less.
Is it time to buy the dip in Nvidia?
At this point, I think investors should be skeptical of DeepSeek’s claims. There is a lot of doubt about the costs used to build its model, and apparent evidence that it piggybacked off of OpenAI’s model.
In that light, the sell-off in Nvidia stock looks like a great buying opportunity. There is still a lot of AI infrastructure spending planned, and I don’t think DeepSeek’s claims are going to slow it down.
Trading at a forward price-to-earnings (P/E) ratio of 27 based on analysts’ 2025 estimates and a forward price/earnings-to-growth ratio (PEG) of under 0.85, the stock looks like a bargain. Stocks with positive PEG ratios below 1 are typically viewed as undervalued, and growth stocks often have PEGs much higher than that. For investors who have been waiting for a dip to buy Nvidia, this is their chance.
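For context on how that metric works: PEG divides a stock's forward P/E by its expected annual earnings-growth rate expressed in percent. A quick back-of-envelope sketch using the figures above (the implied growth number is simple arithmetic from those figures, not an analyst estimate):

```python
def peg_ratio(forward_pe, growth_rate_pct):
    """PEG = forward P/E divided by expected annual earnings growth (in %).
    Readings below 1 suggest the price hasn't fully caught up with growth."""
    return forward_pe / growth_rate_pct

# A forward P/E of 27 with a PEG of 0.85 implies an expected earnings
# growth rate of about 27 / 0.85, or roughly 31.8% per year.
implied_growth_pct = 27 / 0.85
print(round(implied_growth_pct, 1))  # 31.8
```

In other words, the sub-1 PEG cited above is simply saying that analysts' growth expectations outrun the multiple investors are paying.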
Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool’s board of directors. Geoffrey Seiler has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Meta Platforms, Microsoft, Nvidia, and Oracle. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.