Chinese AI startup DeepSeek started 2026 with the publication of research that could vindicate its earlier claims of training world-class models on a shoestring budget. Those same claims wiped nearly $600 billion off Nvidia’s market value in a single day last January, and the release of the company’s chatbot last year prompted ChatGPT maker OpenAI to issue a companywide “code red” while unsettling other AI giants, including Google, Anthropic, and Meta.

The Hangzhou-based company published a technical paper on January 1st introducing “Manifold-Constrained Hyper-Connections” (mHC), a training method designed to let AI models scale without ballooning computational costs. Co-authored by founder Liang Wenfeng, the paper directly challenges the industry-wide consensus that the smarter an AI model is, the more computing power and chips it requires.
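DeepSeek’s announcement offers little mechanical detail, but the name points to “hyper-connections,” a known generalization of the residual connection in which each layer reads from and writes to several parallel hidden streams through small learnable mixing matrices; a “manifold constraint” would keep those matrices confined to a well-behaved set. The sketch below illustrates only that general idea: the stream count, the Sinkhorn-style projection onto doubly stochastic matrices, and all names here are assumptions made for illustration, not DeepSeek’s published formulation.

```python
# Illustrative sketch only: a hyper-connection-style block whose stream-mixing
# matrix is projected onto a constrained set (here, doubly stochastic matrices
# via Sinkhorn normalization). This is an assumed stand-in, not DeepSeek's mHC.
import torch
import torch.nn as nn


def sinkhorn(logits: torch.Tensor, n_iters: int = 5) -> torch.Tensor:
    """Push an n x n matrix of logits toward a doubly stochastic matrix by
    alternately normalizing rows and columns in log space."""
    log_p = logits
    for _ in range(n_iters):
        log_p = log_p - log_p.logsumexp(dim=-1, keepdim=True)  # rows sum to 1
        log_p = log_p - log_p.logsumexp(dim=-2, keepdim=True)  # cols sum to 1
    return log_p.exp()


class ConstrainedHyperConnection(nn.Module):
    """Maintains n parallel residual streams. Each sublayer reads a weighted
    mix of the streams, and the streams themselves are remixed by a small
    constrained matrix, so the extra cost per block is only an n x n matmul."""

    def __init__(self, n_streams: int = 4):
        super().__init__()
        self.mix_logits = nn.Parameter(torch.zeros(n_streams, n_streams))
        self.read = nn.Parameter(torch.full((n_streams,), 1.0 / n_streams))
        write = torch.zeros(n_streams)
        write[0] = 1.0  # start out close to a plain residual connection
        self.write = nn.Parameter(write)

    def forward(self, streams: torch.Tensor, block: nn.Module) -> torch.Tensor:
        # streams: (n_streams, batch, seq, d_model)
        mix = sinkhorn(self.mix_logits)                      # constrained mixing
        mixed = torch.einsum("ij,jbtd->ibtd", mix, streams)  # remix the streams
        x = torch.einsum("i,ibtd->btd", self.read, streams)  # block input
        out = block(x)                                       # any sublayer (attention/MLP)
        return mixed + self.write.view(-1, 1, 1, 1) * out.unsqueeze(0)
```

In such a design, token embeddings would be replicated across the streams at the input and averaged back together before the output head; because the mixing matrices are only n x n, the added compute is tiny next to the attention and MLP matmuls, which is one plausible reading of the paper’s “negligible computational overhead” claim.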
DeepSeek’s January 2026 research validates the claims behind its January 2025 market shock
The innovation arrives nearly a year after DeepSeek’s R1 model sent shockwaves through Silicon Valley by matching ChatGPT’s performance at a reported fraction of the development cost. Back then, Nvidia scrambled to contain investor panic, insisting DeepSeek’s efficiency gains would actually boost chip demand during AI “inference,” the phase when models serve users at scale.

“Inference requires significant numbers of Nvidia GPUs,” the chipmaker stated in January 2025, attempting to reassure markets even as its stock plummeted 17%. Nvidia acknowledged DeepSeek had used export-compliant H800 chips to achieve competitive results, raising uncomfortable questions about whether US export controls were achieving their intended effect.

DeepSeek’s latest mHC paper provides the technical depth behind those earlier claims. Testing models of up to 27 billion parameters, the company’s 19 researchers demonstrated “superior scalability” with “negligible computational overhead,” essentially arguing that powerful AI can be built without matching the massive chip purchases of OpenAI or Google.

The R1 release also prompted OpenAI CEO Sam Altman to declare a “code red” in January 2025, one of several emergency mobilizations the company has initiated when competitive threats emerge. Altman recently told the Big Technology Podcast that OpenAI expects to sound such alarms “once, maybe twice, a year for a long time” as rivals close in.
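To put the “negligible computational overhead” claim in perspective, here is a back-of-envelope comparison under the illustrative assumptions sketched earlier: the extra per-token cost of remixing a handful of parallel streams versus one standard d x d projection inside a transformer block. The hidden width and stream count are assumed values, not figures from the paper.

```python
# Back-of-envelope only; d_model and n_streams are assumed, not from the paper.
d_model = 4096                                   # hidden width of a large model
n_streams = 4                                    # parallel residual streams

proj_flops = 2 * d_model * d_model               # one d x d matmul, per token
mix_flops = 2 * n_streams * n_streams * d_model  # n x n remix of d-dim streams

print(f"{mix_flops / proj_flops:.2%}")           # 0.39% of a single projection
```

Even accumulated across every block, an addition well under one percent of a single projection’s cost would square with the overhead the researchers describe as negligible.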
The industry’s scaling debate intensifies as costs mount
The research sharpens a fundamental debate splitting AI leaders. Google DeepMind CEO Demis Hassabis told December’s Axios AI+ Summit that “we must push [scaling] to the maximum” to reach artificial general intelligence, though even he conceded the industry likely needs “one or two” breakthroughs beyond raw compute.

Meanwhile, OpenAI’s Sam Altman wrote in his recent 10-year anniversary reflection that superintelligence appears “almost certain” within a decade, a timeline predicated on continued massive infrastructure spending across the industry.

Industry analyst Wei Sun of Counterpoint Research called DeepSeek’s mHC method a “striking breakthrough” that enables the company to “bypass compute bottlenecks and unlock leaps in intelligence.” Omdia’s Lian Jye Su noted that the willingness to publish such foundational research “showcases newfound confidence in the Chinese AI industry.”

Expectations are now building for DeepSeek’s next flagship model, possibly R2 or V4, around February’s Spring Festival, continuing the company’s pattern of releasing major innovations during China’s holiday period.