BTC 72,807.00 +6.62%
ETH 2,134.55 +7.66%
S&P 500 6,869.50 +0.78%
Dow Jones 48,739.41 +0.49%
Nasdaq 22,807.48 +1.29%
VIX 21.15 -10.27%
EUR/USD 1.09 +0.15%
USD/JPY 149.50 -0.05%
Gold 5,151.60 +0.33%
Oil (WTI) 76.11 +1.94%

OpenAI strikes deal with Pentagon following Claude blacklisting — Anthropic to challenge supply chain risk designation in court

Altman’s announcement came not long after President Trump “ordered” every federal agency to immediately stop using Anthropic's technology.

Sam Altman looking pensive
(Image credit: Getty Images / Bloomberg)

OpenAI CEO Sam Altman announced late Friday night that the company had reached an agreement with the U.S. Department of Defense (“rebranded” as the Department of War under the current administration) to deploy its AI models on the Pentagon's classified network, with the same two safety conditions Anthropic was effectively blacklisted for insisting on: no domestic mass surveillance, and human oversight of decisions involving lethal force and autonomous weapons.

Altman’s announcement came not long after President Trump “ordered” every federal agency to immediately stop using Anthropic's technology, following weeks of tense negotiations between Anthropic and Pentagon officials that ultimately collapsed. The DoD had labeled Anthropic a supply chain risk and demanded that it drop restrictions on its Claude model, requiring the model to be available for "all lawful purposes." Anthropic refused. Hours later, the Pentagon accepted functionally identical conditions from OpenAI.

It’s understood that no formal contract between OpenAI and the Pentagon has been signed yet, and that the agreement limits OpenAI's deployment to cloud environments, not edge systems such as aircraft or drones.

Anthropic argued that the law hasn't kept pace with what AI can do, particularly in aggregating publicly available data for surveillance purposes. Altman seemed to agree, stating in an internal memo to OpenAI staff that the company shares Anthropic's "red lines" and wants to help "de-escalate" the situation.

By Friday afternoon, however, he held a company all-hands meeting, telling employees the deal was taking shape. Around 70 OpenAI employees have separately signed an open letter titled "We Will Not Be Divided" expressing solidarity with Anthropic.

Anthropic was the first AI lab to deploy its models on the Pentagon's classified networks, through a partnership with Palantir. OpenAI had previously held a $200 million DoD contract for non-classified use cases. Anthropic said Friday it will challenge the supply chain risk designation in court, stating that "no amount of intimidation or punishment from the Department of War will change our position."

Luke James
Contributor

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware, microelectronics, and anything regulatory.

  • wyldpea
    It's all about money. There's no more ethics. You may agree with Altman, but when they turn their sights on the American people, we'll see how well that goes! Skynet isn't so farfetched anymore.
    Reply
  • CelicaGT
    wyldpea said:
    It's all about money. There's no more ethics. You may agree with Altman, but when they turn their sights on the American people, we'll see how well that goes! Skynet isn't so farfetched anymore.
    Ethics in business died many decades ago. What you see here is a symptom, not the disease.
    Reply
  • Notton
    These "red lines" from OpenAI seem to be the same thing as "Guardrails" from Anthropic.

    I assume Altman got the deal because he's easier to ply and bend over.
    Reply
  • Jabberwocky79
    This smacks of a difficult client insisting that a contractor do something a certain way even after being told it isn't possible. The client gets mad, thinking they are being lied to, and goes and finds someone else who ends up telling them the exact same thing. The client has to concede the first guy was right but is too proud to go back and admit it, so the client just goes with the second guy even though nothing has changed.
    Reply

Rampagefang Market Intelligence