The breakdown of the professional relationship between Anthropic and the Pentagon over a defense contract has gained widespread attention. Anthropic CEO Dario Amodei’s refusal to lift safeguards that would allow unrestricted military access to his company’s AI model has drawn sharp criticism from senior US Department of Defense officials and US President Donald Trump.
One of the grounds on which Amodei refused to renew the contract with the Department of Defense was the potential use of his AI model for fully autonomous weapon systems. In a recent interview with CBS News, the CEO elaborated on how the use of AI in such weapon systems could be catastrophic for human rights.
AI’s lack of human judgment raises concerns, according to Anthropic CEO
When asked what could go wrong with the military use of AI, Amodei pointed out that an AI-integrated weapon system could shoot a civilian, because it lacks the judgment a human soldier would exercise.
“It targets the wrong person, it shoots a civilian. It doesn’t show the judgment that a human soldier would show … We don’t want to sell something that we don’t think is reliable,” Amodei stated. He also noted that the model was not trustworthy enough to rule out “friendly fire,” saying he did not want to sell something that could “get our own people killed or innocent people killed.”
In the same interview, Amodei stressed that his position on the Pentagon offer remained the same, and he was not going to budge on the “two red lines.” Notably, Amodei wanted safeguards against the use of Claude for mass domestic surveillance and fully autonomous weapon systems.
“We believe that crossing those red lines is contrary to American values, and we wanted to stand up for American values,” Amodei stated in the interview. However, he added that he was willing to work with the United States Department of Defense as long as he was guaranteed that the red lines would not be crossed.
Anthropic received a contract worth up to $200 million from the Department of Defense last year. Its AI model still has clearance for use in classified military missions. Reportedly, the company’s AI system has been used in the recent airstrike on Iran’s military infrastructure and leadership, as well as in the military operation involving the capture of Venezuelan President Nicolás Maduro. However, based on Trump’s recent directive, the US security apparatus will phase out Anthropic’s AI system within six months.
As things stand, US Secretary of Defense Pete Hegseth has designated the AI firm a “security risk” and directed all stakeholders who do business with the US military not to “conduct any commercial activity with Anthropic.”
Now, it remains to be seen whether the Pentagon offers Amodei’s company a new contract with the safety clauses in place, or whether the door to renegotiation is permanently shut.