Anthropic CEO Dario Amodei said the company is still open to talks but wants stronger safeguards in writing. (Photo: TechCrunch, CC BY 2.0)
Anthropic CEO Dario Amodei said the company “cannot in good conscience accede” to the US military’s unrestricted access to its AI tools, deepening a public fight with the Department of Defense.
Anthropic, which makes the AI chatbot Claude, said the new contract language does not clearly bar the use of its technology for mass surveillance of Americans or for fully autonomous weapons, both of which the company's internal policies prohibit. Amodei said Anthropic remains open to negotiations but wants stronger safeguards written into the agreement.
Defense Secretary Pete Hegseth gave Anthropic a Friday deadline to comply or risk losing its contract. The Pentagon has also asked Boeing and Lockheed Martin to assess their reliance on Claude.
The Defense Department already works with OpenAI, Google, and xAI. Spokesman Sean Parnell said the military uses AI only for lawful purposes and will not allow companies to dictate operational decisions.
Claude currently operates inside the military’s classified systems through a partnership with Palantir Technologies.
Competitors, including xAI, Google, and OpenAI, are pushing for classified military contracts.
Meanwhile, Nvidia CEO Jensen Huang told CNBC that he hopes the Pentagon and Anthropic sort things out, and if not, it is “not the end of the world.”