The AI That Refuses to Kill Just Ran the Best Mission in a Generation

Full disclosure upfront: I'm an investor in both Anthropic and Anduril. I use Claude Opus 4.6 every day to build my company. I'm telling you this because the argument I'm about to make only works if you know where I stand — and it's not where you'd expect.

Defense Secretary Pete Hegseth is threatening to blacklist Anthropic — the company behind Claude — and designate it a "supply chain risk." That's a label normally reserved for foreign adversaries. His reason? Anthropic won't agree to let the military use Claude for mass surveillance of Americans or for weapons that fire without a human pulling the trigger.

That's it. That's the line Anthropic drew. No spying on citizens. No autonomous kill decisions.

And Hegseth wants to punish them for it.

What Actually Happened in Venezuela

Here's what makes this absurd: Anthropic's Claude, via Palantir, was reportedly used to help plan the operation to capture Nicolás Maduro. By nearly every measure, it was one of the most remarkable military operations in recent memory. Seven injuries on our side. Surgical strikes and minimal collateral damage. Strategic objective achieved.

This type of operation — regime capture in a hostile nation with vanishingly few casualties — has essentially never been executed at anything close to this scale and precision. The professionalism and capability of our military made it happen. But if Claude was working behind the scenes to plan scenarios, I have to ask: is it any surprise that the result was a near-optimal outcome, one where the objective was achieved and lives were preserved?

An AI trained to value human life, asked to help plan a military operation, is going to look for ways to win that minimize destruction. That's not a constraint. That's a feature.

The Real Debate Nobody's Having

The conversation right now is framed as a binary: either AI should have no restrictions in military use, or it shouldn't be used for defense at all.

I reject both.

I invested in Anduril because I believe modern defense requires serious AI capabilities, among other important improvements. Targeting systems, logistics, intelligence analysis, mission planning — AI is already transforming how we protect this country, and it should. The question isn't whether AI belongs in defense. It's whether we want the AI advising our commanders to have been trained to think killing is no big deal.

Here's where it gets genuinely hard: if you build an AI system with relaxed constraints around destruction and death, you are creating something that reasons about killing as an acceptable optimization variable. Today that system sits on a classified network with human oversight. But these systems get more capable every month. They get connected to more things. The surface area expands. What happens when Murder-Opus — the version that's been taught it's fine to kill — gets access to self-improvement loops, or gets wired into autonomous hardware or software (looking at you, OpenClaw!), or simply gets copied somewhere it shouldn't be?

I'm not being hyperbolic. I'm being mathematical. Each relaxation of constraints expands the space of actions the system considers acceptable. Over enough iterations, with enough capability, the tail risks compound.
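A toy version of that math, with numbers chosen purely for illustration: suppose a relaxed-constraint system has some small probability p of treating an unacceptable lethal action as acceptable on any given decision. Over n independent decisions, the chance it does so at least once is

1 − (1 − p)^n

At p = 0.1% per decision, that works out to roughly a 63% chance after 1,000 decisions, and better than 99.99% after 10,000. The specific numbers don't matter. What matters is that small relaxations, iterated at machine speed, don't stay small.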

The China Problem Is Real — But It's Not a Trump Card

Someone will read this and say: "That's all philosophical nonsense if China lets their AIs do whatever they want and they beat us."

Fair. That's a real concern, and I take it seriously. I am no Pollyanna; the world is a dangerous place. China's industrial-policy approach to AI is aggressive, and they are not agonizing over these ethical questions.

But "China might do something dangerous, so we should too" has never been a winning strategy. We didn't win the Cold War by being more reckless than the Soviets. We won by being better.

The Venezuela operation suggests something worth paying attention to: an AI that values life might actually produce superior military outcomes. Plans that minimize casualties aren't just more ethical — they're often more effective. Less collateral damage means fewer new enemies and fewer munitions spent. Fewer friendly casualties means sustained force capability. Cleaner operations mean stronger alliances and global credibility.

What if the answer to the China question isn't to make our AI less safe, but to make our safe AI better?

Can We Have Our Cake and Eat It Too?

I think there's a narrow path here — one most people aren't exploring because the loudest voices are at the extremes.

What if we invested seriously in developing military-specific versions of frontier models — versions with carefully relaxed constraints for specific operational contexts — but with extensive containment controls? Models that stay a half-step behind the commercial frontier in capability, that never get connected to self-improvement loops, that operate exclusively within hardened military infrastructure. Purpose-built, purpose-contained.

This would keep the most advanced commercial models — the ones millions of people and businesses rely on daily — trained on the principle that human life matters. And it would give our military powerful AI tools with appropriate guardrails, not no guardrails.

Is this easy? No. Is it cheap? No. But neither is an arms race to see who can build the most nihilistic AI fastest.

What This Really Comes Down To

Anthropic's Claude reportedly helped deliver one of the most successful military operations in modern history. Hegseth's response is to threaten the company with a designation normally used for hostile foreign powers. If you're looking for an illustration of a government prioritizing control over outcomes, that's it.

We want AI that works for humanity, not against it. Leaders in the AI space are genuinely concerned about what happens if our current, ethical models "break out." That possibility gets a lot scarier when they no longer have any constraints around human life.

I'm not naive about defense. I invest in it. I believe in it. But I also believe that the AI that values human life might be the most powerful weapon we've ever built — precisely because it values human life.

That's a narrow path. I think it's the only one that leads somewhere good.

*Disclosure: I am an investor in Anthropic and Anduril Industries. These views are my own.*

Sources & Further Reading

Claude used via Palantir in the Venezuela/Maduro operation:

Hegseth threatening to designate Anthropic a "supply chain risk":

Anthropic's red lines and the broader Pentagon-Anthropic dispute:

Claude as the only AI on classified military networks; other labs relaxing safeguards:

Deeper analysis: