Hegseth gave Anthropic a Friday deadline to hand over its AI for unrestricted military use, and the Pentagon’s backup plan is chilling

“Pete Hegseth” by Gage Skidmore is licensed under CC BY-SA 2.0.

The Pentagon doesn’t take no for an answer.

Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei and gave him a hard deadline: open the company’s AI to unrestricted military use by Friday, or risk losing its government contract. Anthropic is the last of its major AI peers to withhold its technology from a new U.S. military internal network.

Amodei has repeatedly raised ethical concerns about unchecked government use of AI, particularly around autonomous armed drones and AI-assisted mass surveillance. In an essay last month, he wrote that a powerful AI “looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.”

According to AP News, Pentagon officials warned that if Anthropic refuses, it could be labeled a supply chain risk, or the military could invoke the Defense Production Act to use Anthropic’s products without the company’s approval.

Anthropic is increasingly isolated as its AI peers fall in line with Pentagon demands

Hegseth’s push fits into his broader goal of removing what he calls “woke culture” from the armed forces. In a January speech at SpaceX, he said he was done with AI models “that won’t allow you to fight wars,” and declared that the Pentagon’s “AI will not be woke.” The ultimatum is the latest in a series of culture-war moves by Hegseth that have drawn widespread attention in recent months.

The Pentagon had previously awarded defense contracts worth up to $200 million each to four AI companies: Anthropic, Google, OpenAI, and Elon Musk’s xAI. While Anthropic was the first approved for classified military networks, the others have shown more willingness to comply. 

OpenAI joined the secure but unclassified Pentagon AI network, GenAI.mil, in early February with a custom version of ChatGPT. 

xAI has also said its Grok chatbot is ready for classified settings. Owen Daniels, an associate director at Georgetown University’s Center for Security and Emerging Technology, noted that Anthropic’s peers, including Meta, Google, and xAI, have been willing to comply with the department’s policy on using models for all lawful applications.

He believes this limits Anthropic’s bargaining power and risks the company losing influence as the Pentagon pushes to adopt AI more broadly. This standoff is not the first time Anthropic has clashed with the Trump administration. 

The company previously criticized Trump’s proposals to loosen export controls on AI chips to China. In October, Trump’s top AI adviser, David Sacks, accused Anthropic of “running a sophisticated regulatory capture strategy based on fear-mongering.”

His remarks came in response to Anthropic co-founder Jack Clark’s comments about balancing optimism with what Clark described as “appropriate fear” surrounding advanced AI. Hegseth has also faced pushback on other fronts, including a judge’s ruling that blocked his attempt to censure a senator over a troop video dispute.

Amodei has warned that “we are considerably closer to real danger in 2026 than we were in 2023.” Amos Toh, senior counsel at the Brennan Center’s Liberty and National Security Program at New York University, stressed that while the law struggles to keep up with technology, it doesn’t mean the Department of Defense has a blank check, especially if AI is used to surveil Americans.


Attack of the Fanboy is supported by our audience. When you purchase through links on our site, we may earn a small affiliate commission. Learn more about our Affiliate Policy




Source link

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button