Defense Secretary Pete Hegseth (left) and Anthropic CEO Dario Amodei | Credits: Gage Skidmore (CC BY-SA 3.0); TechCrunch (CC BY 2.0)
Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei are reportedly meeting today to discuss the terms of military use of the AI company’s large language model.
At the heart of the conflict is Claude, currently the only AI model integrated into the military’s classified systems. The Pentagon relies on the LLM for sensitive intelligence work, and it was reportedly used in the US capture of former Venezuelan leader Nicolas Maduro.
Concerns grew when Anthropic refused to fully lift its safeguards. The company remains firm on walling off two specific areas: the mass surveillance of Americans and the development of autonomous weapons that fire without human intervention.
The clash prompted Hegseth to brand Anthropic a supply-chain risk — a designation that would force Pentagon contractors to drop the company’s tools and gut much of its enterprise business.
Amodei has often described Anthropic as a safety-first AI firm. That stance clashes sharply with the Pentagon’s demand that AI labs make their models available for “all lawful uses.”
Pentagon officials say it won’t be a friendly meeting.
Anthropic landed a $200 million deal with the Department of Defense last year.
Rivals are circling. Alphabet Inc., OpenAI, and xAI have agreed to lift limits on military use of their AI models as they pursue classified Pentagon contracts.
Meanwhile, Anthropic has gone public with allegations that three prominent Chinese AI firms — DeepSeek, Moonshot AI, and MiniMax — used over 24,000 fake accounts to generate 16 million exchanges with Claude in order to improve their own models.