
Photo by TechCrunch, licensed under CC BY-SA 2.0
QuitGPT is gaining rapid momentum.
OpenAI has officially inked a deal with the US Department of War, and ChatGPT users are furious about it, as reported by TechRadar. The backlash is substantial, with many users canceling their subscriptions and looking for alternatives after this controversial move.
The timing makes the situation particularly notable: it comes right after another major AI developer, Anthropic, walked away from a similar agreement with the US military. Anthropic cited serious safety and security concerns, especially around using its AI tech for “mass surveillance” and “fully autonomous weapons.” The company wanted safeguards in place for those areas, but the Department of War wasn’t willing to agree to them.
Now, with OpenAI stepping in where Anthropic wouldn’t, the internet is buzzing. A growing number of people are reportedly pulling the plug on their ChatGPT subscriptions, and some Redditors are even posting detailed guides on how to export your data and completely remove yourself from ChatGPT. Users are accusing OpenAI of having “no ethics at all” and of “selling their soul” by allowing its AI models to be used by the US military.
It seems people are voting with their wallets and choosing AI tools that align more closely with their ethical concerns.
Adding another layer to this, tech investor Aidan Gold pointed out on X that OpenAI had backed Anthropic’s safety stance before going on to sign its own deal with the Department of War. Meanwhile, the US government isn’t happy with Anthropic’s refusal either, announcing its intention to remove Claude from all of its departments.
Honestly, the ethics of AI have always been a bit murky. Most of the popular chatbots we use today were trained on mountains of copyrighted work that was often stolen. They also bring with them the very real threat of triggering mass job redundancies, and they gobble up vast amounts of energy to run. So, when you add military applications to that list, it just cranks up the ethical questions to eleven.
OpenAI, for its part, claims that its deal with the US military “has more guardrails” than the one Anthropic rejected, pointing to “red lines” it plans to enforce around mass surveillance and fully autonomous weapons. However, ChatGPT users aren’t buying it, especially given the “all lawful purposes” language in the agreement. That phrase is a red flag for many, since it leaves enormous room for interpretation.
This debate isn’t going to disappear anytime soon, but in the immediate aftermath, we’re already seeing significant shifts. Anthropic’s Claude chatbot has shot up to the top spot in the Apple App Store, a pretty clear indicator of where users are migrating.
Published: Mar 2, 2026 04:00 pm