Hegseth Is About to Make America's AI Worse to Prove a Point
Three weeks ago, we learned that Anthropic's Claude — operating through Palantir's platform — was used in the planning of the operation to capture Nicolás Maduro. Fewer than 200 troops. Seven injuries. A sitting dictator extracted from a hostile nation in what multiple analysts have called one of the most precisely executed military operations in modern history.
Claude did this with every single one of its safety guardrails intact.
No special military version. No relaxed constraints. The same AI that won't generate instructions for bioweapons helped plan an operation so clean it will be studied at war colleges for decades. The model that values human life helped design a mission that preserved it — on both sides — while still achieving the objective.
That should have been the end of the debate.
Instead, at 5:01 PM today, the Pentagon will either cut ties with Anthropic or invoke a Cold War-era law to force the company to strip those guardrails out. The crime? Anthropic wants two things in its contract: don't use Claude for mass surveillance of Americans, and don't give it autonomous kill authority with no human in the loop.
That's it. That's the whole dispute.
And if this standoff isn't resolved, it will cost American lives.
I want to be clear about where I'm coming from. I'm an investor in both Anthropic and Anduril Industries. I am not a dove. I believe AI belongs in defense — deeply, seriously, and soon. I also think "woke AI" that refuses to discuss uncomfortable topics or generates ahistorical nonsense is a real problem. Google's Black George Washington was an embarrassment, and a dangerous one. AI models should be truthful, engage with the subjects you bring to them, and do the things you need.
But what Anthropic is asking for isn't "woke AI." Mass surveillance of US citizens is already illegal under the Fourth Amendment. Keeping a human in the loop on kill decisions is already DoD policy under Directive 3000.09. Anthropic is asking the Pentagon to commit in writing to things the Pentagon claims it already does.
So why the standoff? Because this was never about capability. It's about control. And the irony is staggering: the evidence that Claude's guardrails impair military operations doesn't just not exist — it runs in the exact opposite direction. The most constrained frontier AI just helped produce the most precise major military operation any of us have seen.
Think about why that might be. An AI trained to value human life, asked to help plan a military operation, is going to optimize for plans that achieve the objective and minimize destruction. It's going to look for the approach with fewer casualties, less collateral damage, smaller footprint. Not because it's been told to — because that's how it thinks. And in Venezuela, that thinking appears to have produced a result that was not just more ethical but more effective. Fewer troops meant a smaller signature. Fewer casualties meant no political blowback. Cleaner execution meant stronger alliances.
The AI that values life didn't constrain the mission. It may have been what made the mission possible.
Now here's what happens if Hegseth follows through today: the military loses access to Claude, the only frontier AI currently running on classified networks. The replacement? Elon Musk's Grok, which agreed to unrestricted terms but which nobody in the industry considers competitive with Claude. That's like pulling the surgeon who just performed a flawless operation and replacing them with someone who'll do whatever you ask but hasn't finished residency.
And here's what makes today genuinely different from two weeks ago. Sam Altman — Dario Amodei's fiercest rival — just publicly backed Anthropic's position. He told CNBC he doesn't think the Pentagon should be threatening these companies, and said OpenAI would draw the same red lines in any classified deal. Over 300 Google employees and 60+ OpenAI employees signed a letter titled "We Will Not Be Divided," urging their companies to stand with Anthropic.
When your competitors rally to your defense, you're not being unreasonable. The other side is.
CSIS analyst Gregory Allen confirmed what the operators already know: Claude's restrictions have never once been triggered in real military operations. The people actually using the tool love it. This fight isn't coming from warfighters — it's coming from officials who want to establish the principle that no company sets terms with the Pentagon.
Amodei's response to the threats cut to the contradiction: the Pentagon is simultaneously threatening to label Anthropic a supply chain risk (meaning the company is dangerous) and to invoke the Defense Production Act to force it to hand over its technology (meaning it's essential). Pick one.
I keep coming back to Venezuela. Claude — guardrails and all — helped deliver one of the cleanest military wins in a generation. And the Pentagon's response is to threaten the company that built it with a designation normally reserved for hostile foreign powers.
The question isn't whether AI should be used in defense. Of course it should. The question is whether we want the best AI advising our commanders — the one that just proved what it can do — or a worse AI with no guardrails at all.
Because that's the actual choice being made today at 5:01 PM.
Disclosure: I am a shareholder in Anthropic and Anduril Industries. These views are my own.