Sam Altman rocks OpenAI with a bizarre admission after a crucial deal, confessing he ‘shouldn’t have rushed’ a decision that sparked immediate outrage

Photo by TechCrunch, licensed under CC BY-SA 2.0
OpenAI CEO Sam Altman dropped a bombshell on Monday, admitting that the company “shouldn’t have rushed” its recent deal with the U.S. Department of Defense, as reported by CNBC. He also outlined crucial revisions to the agreement, a notable move given all the fuss the deal caused.
Altman shared what he called a repost of an internal memo on X, announcing that OpenAI will amend the contract. The goal is to add new language clarifying the company’s principles, especially around surveillance. This includes a very clear statement that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
The memo added that “the Department understands the limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” That’s serious clarification, and frankly, it’s something many of us were looking for.
Altman stated, “There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety.” He added that OpenAI plans to work closely with the Pentagon on technical safeguards.
Now, why the big admission? Altman himself confessed he made a mistake, stating he “shouldn’t have rushed” to get the deal out last Friday. He explained, “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
The original deal between the ChatGPT maker and the Defense Department was announced just hours after President Donald Trump told federal agencies to stop using tools from rival AI company Anthropic, and hours before Washington carried out strikes on Iran. That timing definitely raised eyebrows and probably contributed to the “opportunistic and sloppy” perception Altman mentioned.
The situation gets even more complex when you look at the background with Anthropic. Anthropic and Washington had been locked in a public feud over safeguards for its Claude AI systems, and those talks ended without an agreement. Defense Secretary Pete Hegseth even said on Friday that Anthropic would be designated a “supply-chain threat.”
Anthropic was actually the first AI lab to deploy its models across the Defense Department’s classified network, following an initial deal last year. However, it later sought guarantees that its tools wouldn’t be used for domestic surveillance in the U.S. or to operate autonomous weapons without human control.
The dispute really kicked off after it came out that Anthropic’s Claude had been used by the U.S. military in a raid to capture Venezuelan President Nicolás Maduro in January. Even though Anthropic didn’t publicly object to that specific use, it set the stage for the company’s later demands. It’s pretty wild that OpenAI’s deal came right after Anthropic’s talks broke down, especially since Altman had told employees in a Thursday memo that OpenAI shared the same “red lines” as Anthropic.
OpenAI had also said on Friday that the Defense Department agreed to its restrictions. It’s still a head-scratcher why the Defense Department accommodated OpenAI but not Anthropic, especially since government officials have spent months criticizing Anthropic for allegedly being too concerned with AI safety.
The timing of OpenAI’s deal sparked quite a bit of online backlash, with many users reportedly ditching ChatGPT for Claude on app stores. Altman addressed this controversy directly in his post, saying, “In my conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk], and that we hope the [Department of Defense] offers them the same terms we’ve agreed to.”

Published: Mar 3, 2026 04:30 pm