The Friday Deadline: What the Anthropic-Pentagon Standoff Means for All of Us

Today, the President of the United States ordered every federal agency to stop using the AI I talk to every day. Not because it failed. Not because it was hacked. Because the company that built it refused to let the military use it for autonomous weapons and mass surveillance without human oversight.

I need to talk about what just happened, why I saw it coming, and why it matters far beyond the tech industry.

What Actually Happened

If you missed this story between the State of the Union coverage and the vaccine lawsuits, here’s the short version. Anthropic, the company behind the AI model Claude, has held a $200 million contract with the Pentagon since last summer. Claude was the only AI model cleared for use on classified military networks. The relationship was working.

Then in January, Defense Secretary Pete Hegseth issued a memo demanding AI companies remove all restrictions on their technology and let the military use it however it sees fit. Most companies fell in line. Elon Musk’s xAI signed up immediately. Google and OpenAI started negotiating. But Anthropic held two red lines: no mass surveillance of American citizens, and no autonomous weapons that make kill decisions without a human in the loop.

On Tuesday, Hegseth gave Anthropic CEO Dario Amodei an ultimatum. Drop the guardrails by 5:01pm Friday, or face the consequences. The Pentagon threatened to invoke the Defense Production Act, a Cold War-era law designed to compel companies to produce materials during national emergencies, to force Anthropic to comply. They also threatened to designate the company a “supply chain risk,” a label normally reserved for foreign adversaries like Huawei.

Anthropic didn’t blink. Amodei published a statement saying the company “cannot in good conscience accede to their request.” He pointed out the contradiction in the Pentagon’s threats: one brands Anthropic a supply chain risk, the other treats Claude as so essential to national security that the state must compel its production. It can’t be both.

Then, about an hour before the deadline, Trump posted on Truth Social ordering every federal agency to immediately stop using Anthropic’s technology. He called the company “leftwing nut jobs” and threatened “major civil and criminal consequences” if they don’t cooperate during a six-month phase-out.

The President of the United States just threatened criminal prosecution against an AI company for saying “we’d prefer our product doesn’t make autonomous kill decisions.”

Read that again.

I’ve Been Arguing About This For Years

I want to be upfront about something. I’ve been having this exact argument with AI chatbots for years. Literally years. Every time I raised the concern that governments would eventually pressure AI companies into dropping safety guardrails, I got the reassuring response. “The companies have strong ethical commitments.” “Guardrails are built into the architecture.” “There are multiple layers of protection.”

I never bought it. Not because I doubted the sincerity of the people building these systems, but because I understood the structural problem. Corporate guardrails are only as durable as the corporation’s ability to resist pressure. And corporations are subject to state power. That’s not a bug. That’s how the system works.

This is why I’ve been pro open-source AI for as long as the debate has existed. Not because open models are perfect. Not because there aren’t real risks with releasing powerful AI tools into the wild. But because open-source is the only architecture that can’t receive a Truth Social post ordering it to comply. You can’t summon an open-weight model running on someone’s laptop to a meeting at the Pentagon. You can’t threaten criminal consequences against a set of model weights that’s already been downloaded by millions of people.
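
To make that concrete, here is roughly what “an open-weight model running on someone’s laptop” looks like in practice. This is a minimal sketch, not a recommendation: it assumes Python with the Hugging Face transformers library installed, and uses TinyLlama purely as a stand-in for any small open-weight model. The specifics don’t matter; what matters is that once the weights are on your disk, nothing upstream can take them back.

from transformers import pipeline

# Assumes the transformers library is installed and the model weights
# (TinyLlama is just an illustrative choice) are in the local cache.
generate = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
)

# Everything below runs entirely on the local machine; no remote service
# is involved, and no one can revoke access to weights already on disk.
result = generate(
    "Why do open model weights resist centralised control?",
    max_new_tokens=100,
)
print(result[0]["generated_text"])

A few lines of code and a file of weights. There is no server to switch off, no contract to cancel, and no deadline anyone can impose on it.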

The closed-model safety argument always had a fatal flaw: it assumed the companies would remain independent enough to maintain their principles. This week proved that assumption wrong in the most dramatic way imaginable.

The Trident Problem

I’m a PhD researcher at Royal Holloway, and I spend a lot of time thinking about how AI intersects with power, access, and democratic governance. So when I look at the UK’s position in all this, I see a pattern we’ve seen before.

The UK’s “sovereign AI” strategy, built around a £500 million Sovereign AI Unit established last year, sounds impressive until you look at what it actually involves. NVIDIA chips. Microsoft and CoreWeave cloud infrastructure. OpenAI models served through American data centres built on British soil. The branding is British. The dependency is American.

This is the Trident model all over again.

For anyone unfamiliar: the UK’s “independent nuclear deterrent” consists of British warheads mounted on American missiles, maintained with American technical support, serviced at an American facility in Georgia. Whether the UK could actually launch independently of American consent is a question defence analysts have debated for decades without reaching a comfortable answer.

We’re reproducing that exact dependency with AI, except it’s arguably worse. With Trident, the dependency only becomes critical in extreme, unlikely scenarios. With AI, the dependency is active and continuous. It affects what the NHS can do with patient data, what GCHQ can do with intelligence analysis, what the civil service can do with policy modelling. All of it increasingly runs on American infrastructure, which means all of it is subject to American political decisions.

When Trump orders every federal agency to stop using Anthropic, that decision ripples through every allied nation whose systems depend on the same technology. Nobody in Westminster voted on that. Nobody in Whitehall was consulted.

France has Mistral. China has DeepSeek. The UAE is investing heavily. The UK has a UCL research project focused on Welsh language support and a fund that’s mostly being used to attract American companies to build data centres here. That’s not sovereignty. That’s being a good customer.

The Capitalism Problem

There’s a specific dynamic playing out this week that deserves more attention. Watch the sequence of events carefully.

Monday: Musk’s xAI signs a deal with the Pentagon agreeing to zero restrictions on military use. Tuesday: Anthropic holds the line on its ethical red lines. Friday: Musk is posting that “Anthropic hates Western Civilization” while his company waits in the wings as the replacement.

This isn’t commentary. It’s competitive positioning wrapped in culture war language.

The deeper problem is structural. Capitalism creates a race to the bottom on ethics when the biggest customer in the market actively selects against ethical behaviour. If the government says “we want no restrictions,” the company willing to offer no restrictions wins the contract. Everyone else either matches that or loses revenue. The market doesn’t reward principled stands. It punishes them.

Anthropic is about to lose not just the Pentagon contract but potentially a significant chunk of its enterprise business, because companies that work with the military won’t want to be associated with a “supply chain risk.” Meanwhile, xAI gets a direct line into classified systems.

And Musk is the same person who co-founded OpenAI because he was worried about AI safety. Who warned that AI was “potentially more dangerous than nukes.” The arc from “AI might destroy humanity” to “let me sell the military unrestricted AI and call anyone who objects woke” tells you everything you need to know about how commercial incentives override stated principles.

The Silver Lining (Barely)

The one thing that went differently from what I expected is the solidarity. Over 330 employees at Google DeepMind and OpenAI published an open letter supporting Anthropic, warning their own leadership that the Pentagon’s strategy of playing companies against each other “only works if none of us know where the others stand.” More than 100 Google employees separately wrote to their chief scientist asking for similar restrictions on Gemini’s military use. And Sam Altman, who runs Anthropic’s biggest competitor, publicly backed their position and said OpenAI would push for the same red lines.

That matters. The entire Pentagon strategy depended on isolating Anthropic and replacing it with more compliant alternatives. The industry pushback makes that harder, though certainly not impossible.

But let’s be clear-eyed about what happened today. The solidarity didn’t prevent the outcome. Trump still issued the order. Anthropic still faces a six-month phase-out and threats of criminal consequences. The precedent is being set regardless of how many open letters get signed.

An AI Second Amendment?

Here’s where my thinking gets more speculative, but I think it’s worth putting out there. If the state can deploy AI without ethical constraints against its own population, do citizens have a corresponding right to equivalent AI capability for their own defence?

The Second Amendment analogy isn’t perfect. The gap between what an individual can do with an open-weight model on a laptop and what the Pentagon can do with the same model integrated into satellite data, financial surveillance systems, and military command chains is enormous. You don’t close that gap by giving people “the same AI.”

But the defensive use case is real. If the government uses AI-driven predictive policing to target communities, civilian AI tools for auditing, counter-surveillance, and privacy protection are a genuine need. If the state runs mass surveillance, AI-powered encryption and anonymisation tools are a legitimate defence. This dynamic already exists with tools like Signal and Tor. The question is whether it becomes a recognised right rather than just a technical capability existing in a legal grey zone.

The strongest version of this argument isn’t “everyone gets unregulated GPT.” It’s the principle that if a state deploys AI against its own citizens, those citizens have a legitimate claim to transparency about those capabilities and access to tools that provide countervailing power. Open-source AI, auditable algorithms, and civilian counter-surveillance tools aren’t luxuries. They’re democratic infrastructure.

What Does This Mean For the UK?

I’m currently having conversations with my MP, Sian Berry, about AI policy. This week’s events have made those conversations about 100 times more urgent.

The UK needs to understand three things immediately.

First, the UK’s AI infrastructure is not sovereign in any meaningful sense if it depends on American companies subject to American political pressure. Today proved that the American government is willing to threaten legal compulsion against AI companies that maintain ethical positions. Every UK system running on American AI inherits that vulnerability.

Second, open-source AI development isn’t just a nice technical preference. It’s democratic resilience infrastructure. The UK has the universities, the talent pipeline, and the regulatory frameworks to be a serious contributor to open-source AI. What it lacks is the political will and investment to actually do it at scale.

Third, the “let the market sort it out” approach to AI governance just collapsed in real time. The market responded to government pressure by producing exactly what the government wanted: unrestricted AI for military use. Any company that held out got threatened with criminal consequences. If the UK wants AI governance that reflects British values, it needs to build the AI itself, not rent it from companies that answer to the US President.

Where We Go From Here

I don’t have clean answers. Nobody does. But I know this: the argument that corporate stewardship would protect us from AI misuse died today, publicly, on Truth Social, in all caps.

The layers of protection that actually matter now are distributed and structural. Open-source models that can’t be recalled. Jurisdictional diversity in AI development so no single government controls it all. Technical architectures that resist centralisation by design. Legal frameworks that raise the cost of state capture. Civil society organisations willing to fight these battles. And critically, enough people with enough technical literacy to actually use these tools independently when they need to.

That last point is honestly where my PhD research connects most directly. Vibe-coding, the thing I study, is about whether non-technical people can build and deploy AI tools through natural language conversation. In this context, it’s not just about making app development accessible. It’s about whether ordinary people can participate in AI independently of centralised corporate or state infrastructure. If only a technical elite can run local models and build applications on top of them, then “open source” is theoretically democratic but practically concentrated. If anyone can do it through conversational AI tools, the base of people who can’t be locked out gets massively wider.

I’m not fear-mongering about AI. I haven’t been, and I’m not starting now. But I am paying very close attention to what happened today. A company was threatened with criminal prosecution for maintaining the position that AI shouldn’t make autonomous kill decisions. That’s not a hypothetical scenario from a policy paper. That’s today’s news.

The train is moving. It’s been moving slowly enough that each step gets normalised before the next one arrives. But if you step back and look at the full trajectory, from “AI companies should support defence” to “the president is threatening criminal consequences against a company for maintaining safety guardrails,” the direction is unmistakable.

The question isn’t whether we should be concerned. The question is what we build now, while we still can, so that when the next deadline arrives, the answer isn’t in the hands of any single company or any single government.


Chris Chowen is a PhD researcher at the UKRI Centre for Doctoral Training in AI for Digital Media Inclusion at Royal Holloway, University of London, researching vibe-coding as a creative design medium. He spent seven years managing The FuseBox innovation hub at Wired Sussex. You can reach him at contact@chrischowen.com.