Anthropic CEO Dario Amodei has raised alarms over potential industrial espionage targeting America’s leading artificial intelligence companies and urged the U.S. government to take action.
Speaking at a Council on Foreign Relations event on Monday, Amodei expressed concern that foreign spies, likely working on behalf of China, are attempting to steal high-value AI technology. He described the situation as a significant national security risk.
“Many of these algorithmic secrets, there are $100 million secrets that are a few lines of code,” Amodei stated. “I’m sure that there are folks trying to steal them, and they may be succeeding.”
Amodei pointed to China’s history of large-scale industrial espionage, saying companies like Anthropic are almost certainly among the targets. He emphasized the need for increased U.S. government support to counter these threats, though he stopped short of specifying what form that assistance should take.
While Anthropic declined to comment directly on Amodei’s remarks, the company referred to its recent recommendations submitted to the White House’s Office of Science and Technology Policy.
In the submission, Anthropic called for stronger collaboration between the federal government and AI industry leaders to fortify security at advanced AI research labs. The proposal suggested working alongside U.S. intelligence agencies and allied nations to enhance protections.
Amodei’s comments align with his previously voiced concerns about China’s AI development. He has been an advocate for strict U.S. export controls on advanced AI chips destined for China. He also referenced a recent test conducted by Anthropic, in which a model from Chinese AI company DeepSeek received the lowest score on a safety evaluation measuring whether models would generate dangerous bioweapons-related information.
Amodei’s stance has sparked debate within the AI community. Some experts argue that fostering greater collaboration between the U.S. and China could prevent an AI arms race, which they fear could lead to the creation of uncontrollable systems.
Amodei, however, remains firm in his position that China’s AI ambitions pose a serious risk, particularly if the technology is directed toward authoritarian or military applications.