
The ultimatum of AI

February 27th was a big day for the AI tech world. The Pentagon, the headquarters of the US Department of War, gave Anthropic an ultimatum.

To provide some context, Anthropic is the company that developed Claude, an AI much more powerful than ChatGPT. Since July 2025, Anthropic has been working with the Pentagon to incorporate AI into its defense strategies. According to Anthropic, working with the Pentagon is essential to defending the United States and to maintaining America’s lead in the AI race.

AI is largely used by the Pentagon for “intelligence analysis, modeling and simulation, operational planning and cyber operations”.

The Department of War and Anthropic were working together through Palantir, another AI company, to facilitate the use of Claude at the headquarters. Anthropic had been awarded a two-year prototype agreement with a $200 million ceiling. All seemed to be going well until February, when the Pentagon started asking for more. Claude was the best AI available for military tasks, but it came with clear red lines regarding safety, and the Pentagon seemed to have a problem with that. This led to an ultimatum asking Anthropic to remove these barriers for lawful purposes.

Essentially, the Pentagon wanted the AI company to allow the use of Claude for mass domestic surveillance and fully autonomous weapons. It gave Anthropic a total of 72 hours to decide. If Anthropic refused, the $200 million deal would fall apart and the company would be banned by the Department of War.

Mass domestic surveillance is used by a government to track its population’s movements and activities. It is extremely harmful: it endangers “intellectual privacy” and creates a power imbalance between the watcher and the watched. Critics consider it a violation of the Fourth Amendment and fundamentally undemocratic. According to Anthropic, using large language models for that purpose is simply wrong.

Using AI for fully autonomous weapons is also very risky. AI is a technology still in development, and it makes mistakes. In its published statement, Anthropic argues that AI is simply not ready to take on that responsibility and that it could ultimately harm civilians and American soldiers alike.

For these reasons, Anthropic refused to abide by the Pentagon’s request. The company was banned and labelled a supply chain risk, a fate usually reserved for foreign companies.

Soon after, the Pentagon struck the same $200 million deal with OpenAI, which drew a lot of backlash from its users. ChatGPT’s uninstall rate rose 200% shortly after the announcement. OpenAI decided to revise the deal and add the same red lines as Anthropic.

What’s suspicious about this whole affair is that the same deal was made twice. Claude is, as the Pentagon itself recognized, the better AI for military purposes, so why go with OpenAI for the same amount of money and the same red lines? From the outside, it just looks like a worse deal. However, I think it goes deeper than that. We could speculate that there was a messy personal fight between the Secretary of the War Department and the CEO of Anthropic. More realistically, we could speculate that the Pentagon believes OpenAI is more likely than Anthropic to eventually remove these barriers. OpenAI only added its restrictions after public backlash, so we could argue they are ornamental, meant to be dropped when the time is right.

What this whole dispute implies is that AI is, now or in the near future, going to be used autonomously in wars. AI is gaining power very fast, even though it is not yet qualified to wield it. Is AI taking over the world much closer than we think?

Sources

  • Anthropic. “Statement on the Comments from Secretary of War Pete Hegseth.” Anthropic, 27 Feb. 2026.
  • Anthropic. “Where Things Stand with the Department of War.” Anthropic, 5 Mar. 2026.
  • BBC News. “OpenAI Changes Deal with US Military after Backlash.” BBC News, 2026.
  • Journal de Montréal. “Trump Ordonne à l’Administration Américaine de Cesser Immédiatement d’Utiliser l’IA d’Anthropic.” Journal de Montréal, 27 Feb. 2026.