Who Are You Going Into Business With?

The question every business owner should be asking about their AI stack — and what the last 72 hours made impossible to ignore.


Full disclosure upfront, same as always: I'm an investor in Anthropic. I use Claude every day to build my company. You deserve to know that. What follows is what I actually believe.


The NSA is using Anthropic's most powerful model.

The same Department of Defense that declared Anthropic a "supply chain risk" — a label normally reserved for foreign adversaries — is running Mythos Preview inside the NSA to scan for exploitable vulnerabilities in critical infrastructure. The Pentagon is simultaneously arguing in court that Anthropic threatens national security and deploying its most advanced tool to protect national security.

That's not irony. That's the market telling you who actually built the best product.

Three days ago, Dario Amodei walked into the West Wing to sit down with the White House chief of staff and the Treasury Secretary. Both sides called it "productive." The President, asked about it on a runway in Phoenix, said: "Who?"

Welcome to the most important story in technology that most business owners aren't paying attention to.

Two Leaders, Two Playbooks

I wrote in February about the Venezuela operation — how Claude, with every safety guardrail intact, reportedly helped plan one of the most precise military operations in modern memory. Seven injuries on our side. Surgical. I argued then that an AI trained to value human life might actually produce superior outcomes, not just more ethical ones.

Two months later, the evidence got louder. Anthropic built a model so capable at finding software vulnerabilities that it decided the responsible thing to do was not release it broadly. Mythos Preview went to roughly 40 vetted organizations — defensive cybersecurity teams, government agencies, and critical infrastructure operators. Anthropic calls this Project Glasswing. The company simultaneously launched Opus 4.7 for everyone else, with safeguards that detect and block high-risk cybersecurity uses, specifically so they could learn how to eventually deploy Mythos-class power at scale without handing a roadmap to every attacker on Earth.

That's a CEO looking at the most commercially valuable model his company has ever built and saying: not yet. Not until we know how to do this right.

Now contrast that with what's been happening at OpenAI.

Sam Altman's investors are openly questioning whether he's the right person to take the company public. The Wall Street Journal reported that Altman recently asked OpenAI's board to lead a funding round for Helion Energy — a nuclear fusion startup where Altman is a major shareholder. He floated having OpenAI acquire Stoke Space, another company in his personal venture portfolio. He told a podcast he was "zero percent" excited to be CEO of a public company.

OpenAI changed its mission statement six times in nine years. The word "safely" was removed from its most recent IRS filing. The company dropped its profit cap — the mechanism that was supposed to ensure that if AGI arrived, most of the value would flow to humanity rather than shareholders. The nonprofit that was supposed to oversee everything now holds a 26% stake. Microsoft holds 27%.

Decisions reveal values. And values end up inside the product.

Why This Matters to You

If you're running a business — the kind of business I work with, 20 to 500 employees, real customers, real operations — you might think the Pentagon-Anthropic standoff is Washington theater that has nothing to do with you.

You'd be wrong.

When you pick an AI vendor, you're picking a partner. You're handing over context about your business, your customers, your strategy, your communications. The model that processes your data was trained by people who made choices about what to optimize for. Those choices show up in how the model reasons, what it prioritizes, and what it's willing to do.

A company that restricted its own most powerful product because the security implications weren't resolved is making a different bet than a company that removed the word "safely" from its mission statement while working to incorporate ads into its product and whose CEO shops his personal investments to the board.

I know who I want to be working with. The difference matters, and you should be paying attention to it.

The Narrow Path Gets Wider

I wrote in February about a "narrow path" — the idea that we could have powerful AI that serves defense needs and values human life, without having to choose. That we didn't have to pick between capability and conscience.

The last eight weeks have been the best evidence yet that the path exists.

Anthropic built a model with genuine offensive cyber capability — the kind of thing that would make any defense hawk sit up. And they deployed it to the people who need it most while actively working to prevent it from being weaponized against the rest of us. The NSA is using it. CISA is testing it. The UK's AI Security Institute has access. And Anthropic is documenting every vulnerability, running coordinated disclosure, and building the deployment playbook in real time.

That's not safety theater. That's a company stress-testing whether you can have immense power and responsible deployment simultaneously. And so far, the answer appears to be yes.

This isn't even asking CEOs to forgo profits; quite the opposite. Dario's approach to this step-change model first exposed the risks through an extensive adversarial testing program, then implemented a carefully designed program to deal with them. The irony is that this is likely the most profit-maximizing move over the long term: doing the right thing and winning in the marketplace.

Meanwhile, the Pentagon officials who wanted Claude without any restrictions are watching the NSA use a far more powerful Anthropic model — one that exists precisely because the company invested in the kind of careful development they were trying to force Anthropic to abandon.

The guardrails didn't make Anthropic weaker. They made it essential.

The Question

Every technology decision is a bet on who you trust. Not just with your data today, but with the trajectory of what they're building.

So here's the question I'd ask any business owner considering their AI stack:

Who are these people? What do they do when the stakes get real? When they have something the government desperately wants, do they hand it over unconditionally — or do they negotiate terms that reflect what they actually believe? Are they willing to make the tough call to live out their values?

When Anthropic had the most capable cyber model ever built, they restricted access to 40 organizations and went to the White House to discuss responsible deployment.

Meanwhile, OpenAI's CEO was asking the board to fund his fusion startup.

You're not just choosing a model. You're choosing who you're going into business with.

Choose carefully.


Disclosure: I am an investor in Anthropic. These views are my own.


Sources

NSA using Mythos despite Pentagon blacklist:

  • Axios: "Scoop: NSA using Anthropic's Mythos despite Defense Department blacklist" (April 19, 2026)

Dario Amodei / White House meeting:

  • CNN: "CEO of blacklisted Anthropic and White House hold 'productive' discussions on AI" (April 17, 2026)
  • CNBC: "Anthropic's Dario Amodei to meet with White House about Mythos" (April 17, 2026)
  • Axios: "Scoop: Bessent and Wiles met Anthropic's Amodei in sign of thaw" (April 17, 2026)

Mythos / Project Glasswing capabilities:

  • Anthropic Research Blog: "Claude Mythos Preview" (red.anthropic.com)
  • Fortune: "Exclusive: Anthropic 'Mythos' AI model representing 'step change'" (March 26, 2026)
  • CNBC: "Anthropic rolls out Claude Opus 4.7" (April 16, 2026)

Pentagon-Anthropic dispute history:

  • Previous DOS posts: "The AI That Refuses to Kill Just Ran the Best Mission in a Generation" (Feb 23, 2026) and "Hegseth Is About to Make America's AI Worse to Prove a Point" (Feb 27, 2026)

OpenAI structural changes and leadership questions:

  • Gizmodo / WSJ: "OpenAI Investors Aren't Sure Sam Altman Is the Guy to Take Them Public" (April 17, 2026)
  • Fortune: "OpenAI changed its mission statement 6 times in 9 years, removing AI that 'safely benefits humanity'" (February 23, 2026)