The Chinese start-up DeepSeek stunned the world and roiled stock markets last week with its release of DeepSeek-R1, an open-source generative artificial intelligence model that rivals the most advanced offerings from U.S.-based OpenAI—and does so for a fraction of the cost. Influential tech investor Marc Andreessen called the model “one of the most amazing and impressive breakthroughs” he’d ever seen. U.S. President Donald Trump said it was a “wake-up call.”
DeepSeek’s extraordinary success has sparked fears in the U.S. national security community that the United States’ most advanced AI products may no longer be able to compete against cheaper Chinese alternatives. If that fear bears out, China would be better equipped to spread models that undermine free speech and censor inconvenient truths that threaten its leaders’ political goals, on topics such as Tiananmen Square and Taiwan. As these systems grow more powerful, they have the potential to redraw global power in ways we’ve scarcely begun to imagine. Whichever country builds the best and most widely used models will reap the rewards for its economy, national security, and global influence.
China’s catch-up with the United States comes at a moment of extraordinary progress for the most advanced AI systems in both countries. Last September, OpenAI’s o1 model became the first to demonstrate far more advanced reasoning capabilities than earlier chatbots, a result that DeepSeek has now matched with far fewer resources. But these models are just the beginning. On Friday, OpenAI gave users access to the “mini” version of its o3 model. OpenAI’s not-yet-released full o3 model has reportedly demonstrated a dramatic further leap in performance, though these results have yet to be widely verified. This accelerating progress has led some leading AI figures, such as Dario Amodei, the CEO of OpenAI competitor Anthropic, to predict that we could see AI that is “smarter than almost all humans at almost all things” by 2026 or 2027.
Projections of future AI capabilities are deeply contested, and claims made by those who financially benefit from AI hype should be treated with skepticism. But over the past two years, a growing number of experts have begun to warn that future AI advances could prove catastrophic for humanity. These include Geoffrey Hinton, the “Godfather of AI,” who specifically left Google so that he could speak freely about the technology’s dangers. Many of the world’s top scientists have co-signed a simple but chilling statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Predicting what a future threat from advanced AI might look like is a necessarily speculative exercise that veers into the realm of science fiction and dystopia. One of the most common fears is a scenario in which AI systems are too intelligent to be controlled by humans and could potentially seize control of global digital infrastructure, including anything connected to the internet. Some experts dismiss these notions and believe that such extraordinary capabilities are far off or, even if they arrived, would not result in loss of human control over AI systems.
This scientific uncertainty puts policymakers in a tricky spot. But it’s not the job of policymakers to adjudicate which camp is right. It is their job, however, to prepare for the different contingencies, including the possibility that the dire predictions come true.
Given the complex and fast-evolving technical landscape, two policy objectives are clear. The United States must do everything it can to stay ahead of China in frontier AI capabilities. And it must also prepare for a world in which both countries possess extraordinarily powerful—and potentially dangerous—AI systems. In essence, it needs to dominate the frontier of AI and simultaneously defend against the risks.
Of these two objectives, the first one—building and maintaining a large lead over China—is far less controversial in U.S. policy circles, where it is widely seen as a core imperative for technology and security policy today.
But the way the United States should pursue that objective is hotly contested. Two of the key ingredients in AI—data and the technical talent needed to craft these systems—are critical to competitiveness, but they are hard for policymakers to affect directly. Compute, however, the specialized hardware on which AI models are trained and run, is much easier to govern.
Given this, the United States has focused its efforts on leveraging its control of the semiconductor supply chain to restrict China’s access to high-end chips. The success of DeepSeek’s new model, however, has led some to argue that U.S. export controls on chips are counterproductive or just futile. They point to China’s ability to use previously stockpiled high-end semiconductors, smuggle more in, and produce its own alternatives while limiting the economic rewards for Western semiconductor companies.
We argue that to relax export controls would be a mistake—they should instead be strengthened. It’s true that export controls have forced Chinese companies to innovate. But export controls are and will continue to be a major obstacle for Chinese AI development. Just ask DeepSeek’s own CEO, Liang Wenfeng, who told an interviewer in mid-2024, “Money has never been the problem for us. Bans on shipments of advanced chips are the problem.” The company has been extraordinarily creative and efficient with its limited computing resources. If it had even more chips, it could potentially build models that leapfrog ahead of their U.S. competitors.
Export controls are never airtight, and China will likely have enough chips in the country to continue training some frontier models. But reducing the total volume of chips going into China limits the total number of frontier models that can be trained and how widely they can be deployed, upping the chances that U.S. models both take the lead and get adopted more widely, including for national security applications.
If Washington wants to regain its edge in frontier AI technologies, its first step should be closing existing gaps in the Commerce Department’s export control policy. After the first round of substantial export controls in October 2022, China was still able to import semiconductors, Nvidia’s H800s, that were almost as powerful as the controlled chips but had been specifically designed to circumvent the new rules. These loopholes remained open until a revised version of the export controls came out a year later, giving Chinese developers ample time to stockpile high-end chips. After those 2023 updates, Nvidia created a new model, the H20, to fall outside of those controls. The H20 is the best chip China can access for running reasoning models such as DeepSeek-R1. Washington needs to control China’s access to H20s—and prepare to do the same for future workaround chips.
The second objective—preparing to address the risks of potential AI parity—will be trickier to accomplish than the first. But it is equally urgent. In the United States, the need to seriously prepare for the consequences of AI parity is not yet widely accepted as a policy priority. But the technical realities, put on display by DeepSeek’s new release, are now forcing experts to confront it.
DeepSeek’s remarkable results shouldn’t be overhyped. The DeepSeek-R1 model didn’t leap ahead of U.S. competitors in terms of capabilities; its triumph was one of efficiency, roughly equaling those models’ performance on a much lower compute budget. But DeepSeek and other advanced Chinese models have made it clear that Washington cannot guarantee that it will someday “win” the AI race, let alone do so decisively. And if some AI scientists’ grave predictions bear out, then how China chooses to build its AI systems—the capabilities it creates and the guardrails it puts in—will have enormous consequences for the safety of people around the world, including Americans. Pretending this isn’t a possibility would be an abdication of responsibility that puts lives at risk.
One key step toward preparing for that contingency is laying the groundwork for limited, carefully scoped, and security-conscious exchanges with Chinese counterparts on how to ensure that humans maintain control over advanced AI systems.
People on opposite sides of U.S. debates about how to deal with China incorrectly argue that the two objectives outlined here—intense competition and strategic dialogue—are incompatible, though for different reasons.
Doves fear that aggressive use of export controls will destroy the possibility of productive diplomacy on AI safety. Admittedly, it’s difficult to engage when relations are strained. But constantly worrying about whether U.S. policies offend Beijing makes Washington an easy mark in any negotiations.
Hawks, meanwhile, argue that engagement with China on AI will undercut the U.S. ability to compete. They fear a scenario in which Chinese diplomats lead their well-intentioned U.S. counterparts through an endless maze of pointless negotiations designed to delay and degrade aggressive policy action. These hawks point to a long track record of futile efforts to engage with China on topics such as military crisis management, which Washington viewed as matters of mutual concern but Beijing treated as opportunities to exploit the U.S. desire for cooperation.
U.S. policymakers must take this history seriously and be vigilant against attempts to manipulate AI discussions in a similar way. Importantly, Washington should not try to woo Beijing with concessions on semiconductors to entice its leaders to talk. But U.S. policymakers also need to be confident in their ability to advocate for the U.S. national interest, and to expect Chinese negotiators to do the same for theirs. Having a conversation about AI safety does not prevent the United States from doing everything in its power to limit Chinese AI capabilities or strengthen its own.
Even if such talks don’t undermine U.S. competitiveness, China hawks reasonably question what diplomacy can really accomplish. They are justifiably skeptical of the ability of the United States to shape decision-making within the Chinese Communist Party (CCP), which they correctly see as driven by the cold calculations of realpolitik (and increasingly clouded by the vagaries of ideology and strongman rule).
It’s true that the United States has no chance of simply convincing the CCP to take actions that it doesn’t believe are in its own interest. But it can introduce new, technically grounded information into the CCP’s calculations. If both U.S. and Chinese AI models are at risk of gaining dangerous capabilities that we don’t know how to control, it is a national security imperative that Washington communicate with Chinese leadership about this. Having these channels is an emergency option that must be kept open. It can help prepare for the situation no one wants: a great-power crisis entangled with powerful AI. In such a situation, having the most technically capable, security-aware individuals in touch with one another may be essential to pulling us back from the brink.
Chinese leaders will be similarly suspicious that U.S. efforts at diplomacy are just an effort to trick China into slowing down its progress. But the CCP does carefully listen to the advice of its leading AI scientists, and there is growing evidence that these scientists take frontier AI risks seriously. Many of China’s top scientists have joined their Western peers in calling for AI red lines. More recently, a government-affiliated technical think tank announced that 17 Chinese companies had signed on to a new set of commitments aimed at promoting the safe development of the technology. Whether or not China follows through with these measures remains to be seen. But it’s a promising indicator that China is concerned about AI risks.
Deep distrust between China and the United States makes any high-level agreement limiting the development of frontier AI systems nearly impossible at this time. But even in a zero-trust environment, there are still ways to make development of these systems safer.
If each country believes uncontrolled frontier AI threatens its national security, there is room for them to discuss limited, productive mechanisms that might reduce risks, steps that each side could independently choose to implement. Interlocutors should discuss best practices for maintaining human control over advanced AI systems, including testing and evaluation, technical control mechanisms, and regulatory safeguards.
Even discussing a carefully scoped set of risks can raise challenging, unsolved technical questions. Scientists are still trying to figure out how to build effective guardrails, and doing so will require an enormous amount of new funding and research. And even if researchers do figure out how to control advanced AI systems, it remains uncertain whether those techniques could be shared without inadvertently enhancing an adversary’s systems.
But the United States has overcome such challenges before. During the Cold War, U.S. officials were concerned about the potential for accidental or unauthorized launches of nuclear weapons. The Cuban missile crisis in 1962 marked a turning point: U.S. officials grew worried about whether officers in Cuba or leaders in Moscow would make nuclear launch decisions. This concern led the Kennedy administration to begin sharing nuclear safety technologies with the Soviet Union, starting with basic safety mechanisms called “permissive action links”—electronic locks that required codes to authorize a nuclear launch. If an analogous safeguard for frontier AI systems exists today, determining whether it could be shared safely would require extensive new research and sustained dialogue with Beijing, both of which need to begin immediately.
This point bears repeating. Knowledge is power, and across the board, the best tool the United States has for defending itself against AI’s risks is more information. A great deal of effort and resources should be directed toward the study of China’s rapidly emerging system of AI safety institutions and technical standards. To hedge against the worst, the United States needs to better understand the technical risks, how China views those risks, and what interventions can meaningfully reduce the danger in both countries.
Without a doubt, the debut of DeepSeek-R1 has been a wake-up call for Washington. Decisions made this year will shape the trajectories of frontier AI during a period of potentially extraordinary progress, one that brings with it enormous upside possibilities as well as potentially grave dangers.
The task ahead for the United States is daunting but critical. It must do everything it can to shape the frontier on its own terms while preparing for the possibility that China remains a peer competitor during this period of growth. Neglecting either objective would mean leaving the CCP entirely to its own devices on the critical decisions about AI safety and security. That’s an outcome Americans can’t afford.