“I Dropped One AI Provider After the Pentagon Deal — Here’s Which Two I Kept”


The Red Lines — Anthropic vs Pentagon, February 2026
The day Anthropic refused a $200M Pentagon contract changed how I think about every tool in my stack.

By S | AI Creator & Developer, rural Japan


Hello, this is S.

I build things in a small town in Shimane Prefecture, Japan. No coworkers, no office, no commute. Just me, a laptop, and a stack of AI tools I depend on to ship work every week.

In late February 2026, something happened in Washington that forced me to think seriously about a question I had never considered before: do the ethics of the companies behind my tools actually matter to me?

The short answer is yes. But the longer answer changed how I work.


What actually happened

On February 27, 2026, the Pentagon gave Anthropic a deadline: remove two restrictions from your $200 million contract by 5:01 PM, or we cancel it and blacklist you.

The two restrictions were specific. Anthropic had written into its acceptable use policy that Claude could not be used for mass domestic surveillance of American citizens, and could not power fully autonomous weapons systems — ones where no human remains in the targeting and firing loop.

The Pentagon’s position was not that it intended to do these things. Its position was that no private company should have the right to prohibit the US military from doing anything lawful.

Anthropic refused to remove the restrictions. Dario Amodei said they “cannot in good conscience accede to their request.” The deadline passed. Trump ordered every federal agency to immediately stop using Anthropic’s products. Defense Secretary Hegseth designated Anthropic a “supply chain risk to national security” — a classification normally reserved for foreign adversaries.

Within 24 hours, Claude hit #1 on the US App Store. More than 1.5 million users joined the QuitGPT movement. A Reddit thread urging people to cancel ChatGPT crossed 33,000 upvotes. And OpenAI — whose CEO Sam Altman had publicly stated that morning that he shared Anthropic’s concerns — signed a Pentagon deal by that same evening.


Why I noticed this from a small town in Japan

I use Claude daily. For writing, for building the 観 (Kan) app I’m working on, for processing research, for structuring thoughts when they’re still too tangled to be useful. It is the primary tool in my stack.

I had never thought of my tool choice as a values decision. I thought of it as a performance decision. Which model produces better output for my use cases. That was the entire framework.

The Pentagon standoff broke that framework.

For the first time, I was watching a company turn down a $200 million contract, then absorb the blacklisting, the government threats, and the loss of federal customers that followed, because it would not delete two sentences about mass surveillance and autonomous weapons. And I found myself thinking: I have been paying for this company’s product, and this is what they do with the leverage that implies.

That is not a neutral observation. It changed something.


The OpenAI comparison is instructive

Sam Altman said on the morning of February 27 that he shared Anthropic’s position. Hours later, his company signed a deal. He later admitted the timing “looked opportunistic and sloppy.”

MIT Technology Review’s analysis was direct: Anthropic pursued a moral approach that won it many supporters but cost it the contract. OpenAI pursued a pragmatic approach that was ultimately softer on the Pentagon.

OpenAI’s deal includes protections against autonomous weapons and mass surveillance — on paper. But the mechanism differs. Where Anthropic wrote hard contractual prohibitions, OpenAI deferred to applicable law and the assumption that the government would not break it.

The problem with that assumption is well documented. The surveillance programs exposed by Edward Snowden had been deemed lawful by the agencies’ own internal reviews. “Lawful” has a narrower meaning than “ethical.” Contract language that sounds similar can produce meaningfully different accountability.

I am not saying OpenAI’s position is wrong. I am saying the distinction matters, and I now understand it. Before February 2026, I did not.


What I actually changed in my stack

I did not stop using ChatGPT. I still use it for specific tasks where it performs better.

What changed is the weight I now assign to company behavior when I evaluate tools. It used to be 0%. It is now somewhere above zero — I cannot give it an exact number, but it influences decisions at the margin.

Concretely:

When I evaluate a new AI tool now, I read the acceptable use policy. I did not do this before. Most are boilerplate. Some are not.

When I have a choice between two tools with comparable performance, I give weight to whether the company behind it has demonstrated a willingness to absorb real costs in order to hold a stated position. Anthropic absorbed a blacklisting. That is a data point.

I also think more carefully about stack diversification now. Not because I expect any of my tools to be banned in Japan, but because the February events demonstrated that AI providers can be removed from entire ecosystems very quickly. Having a single-provider dependency is now a visible risk in a way it was not before.
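For me, keeping that dependency visible means keeping the provider behind a thin wrapper, so the primary model can be swapped or a fallback can kick in without touching the rest of the code. Below is a minimal sketch of that idea in Python, assuming the official anthropic and openai SDKs are installed and API keys are set in the environment. The function names and the claude-3-5-sonnet-latest model ID are placeholders of mine, not anything from the events above, and SDK details may differ from whatever versions you have installed.

```python
# Minimal provider-agnostic wrapper: swapping the primary model is a
# one-line change to PROVIDERS rather than a rewrite of the whole stack.
from anthropic import Anthropic   # pip install anthropic
from openai import OpenAI         # pip install openai


def ask_claude(prompt: str) -> str:
    # Reads ANTHROPIC_API_KEY from the environment by default.
    client = Anthropic()
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",   # placeholder model ID
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text


def ask_gpt(prompt: str) -> str:
    # Reads OPENAI_API_KEY from the environment by default.
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


# The "stack" is just an ordered list of callables. If the first provider
# is unavailable (outage, policy change, account issue), fall through to
# the next one instead of blocking the day's work.
PROVIDERS = [ask_claude, ask_gpt]


def complete(prompt: str) -> str:
    last_error: Exception | None = None
    for provider in PROVIDERS:
        try:
            return provider(prompt)
        except Exception as err:   # broad on purpose: any failure triggers fallback
            last_error = err
    raise RuntimeError("all providers failed") from last_error


if __name__ == "__main__":
    print(complete("Summarize this acceptable use policy in one sentence."))
```

Nothing about this is clever. The point is only that the dependency becomes a named, inspectable thing instead of an assumption scattered across every file.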


The question this raises for every developer

The Anthropic-Pentagon standoff is the first time in my experience that the ethics of an AI company directly shaped user behavior at scale. Claude going to #1 on the App Store the day after the standoff was not a coincidence. People were voting with their subscriptions.

That means AI tool choice is no longer purely technical. It carries a social dimension. The tools you use signal something to collaborators, clients, and the communities you operate in — even if only implicitly.

I am not arguing that everyone should change their AI stack based on corporate ethics. The performance difference between frontier models is real, and for most developers, output quality is correctly the primary variable.

But I am arguing that pretending the ethics dimension does not exist is no longer tenable. February 2026 made it visible.


What I use now

My current stack, for transparency:

Primary: Claude (Anthropic) — writing, research synthesis, app development via Claude Code

Secondary: ChatGPT (OpenAI) — tasks where GPT-4o outperforms Claude for my specific use cases

Utility: Gemini — long-context document processing

I have not changed the stack, but I have changed how I think about it.

The tools I depend on are made by companies making real decisions about real tradeoffs. Some of those decisions I will agree with. Some I will not. But from now on, I will at least be paying attention.


I write about AI tools, content strategy, and building in public from rural Japan. If your company builds AI tools and wants to reach this audience, reach out: oliverblackwood717@gmail.com

note.com/oliver_wood | medium.com/@johnpascualkumar077


Tags: Artificial Intelligence, AI Tools, Tech Ethics, Developer Tools, OpenAI, Anthropic, Claude

