How OpenAI Replaced Anthropic as US Government’s AI Supplier

OpenAI has replaced Anthropic as the primary AI supplier to the US government and federal agencies after Anthropic's standoff over dropping AI guardrails, a dispute that has boosted Claude's popularity
Amid the ongoing conflict in Iran, reports say that Anthropic's AI models are playing a critical role in attack planning, and the public fallout between the company and the US government has drawn increased attention to the ethical use of AI for military purposes.
Anthropic’s refusal to tear down AI safeguards for use in autonomous weapons or surveillance has prompted the Pentagon, under US Secretary of Defense Pete Hegseth, to label the company a “supply chain risk”.
Anthropic said it will challenge the move in court.
Following the dispute, Sam Altman’s OpenAI moved quickly to fill the gap, replacing Anthropic as a primary supplier of artificial intelligence tools for federal agencies, including parts of the Department of Defense.

The feud with the government massively increased the popularity of Claude, as it topped the App Store charts in the US and UK, snatching the throne from OpenAI's ChatGPT.
Users had started dumping ChatGPT following news of its work with the Pentagon.
CEO Sam Altman responded with clarifications, saying the app would not be “intentionally used for domestic surveillance of US persons and nationals”.
Following the backlash, OpenAI says it has worked with the government to add further language to the agreement.
The company claims: “We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.”
Federal departments adopt new systems
Federal departments such as State, Treasury and Health and Human Services have discontinued use of Anthropic’s Claude platform, adopting OpenAI’s GPT-4.1 and other AI models instead.
The State Department confirmed its in-house chatbot, StateChat, has switched to OpenAI technology, ensuring continuity of AI-assisted services.
The replacement of Anthropic extends to defence contractors.
Palantir, which integrated Claude into its Maven Smart System platform, was instructed to remove Anthropic’s AI.
Other firms using Claude for intelligence and decision-making workflows are making similar adjustments.
Industry groups have raised concerns about the unusual “supply chain risk” designation, warning it may disrupt procurement and slow innovation.
Investors in Anthropic have also expressed frustration, according to a BBC report, noting that while the company’s safety-focused approach is principled, it comes at a commercial cost.
OpenAI outlines usage restrictions
While the shift from Anthropic to OpenAI highlights tensions between government needs and AI ethics, OpenAI says it has firm red lines on use as well.
The statement reads: “We have three main red lines that guide our work with the DoD, which are generally shared by several other frontier labs:
- No use of OpenAI technology for mass domestic surveillance
- No use of OpenAI technology to direct autonomous weapons systems
- No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as ‘social credit’).
“Other AI labs have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments. We think our approach better protects against unacceptable use.”
OpenAI’s new role as a government AI supplier could signal a pivotal moment in US AI policy, shaping the future of ethical standards, security considerations and technology procurement in federal agencies.
