Who Is Your AI Working For?

The most important question in AI right now isn't which model is smartest. It's whose interests it serves when you ask it for help.


Anthropic ran four ads during Sunday's Super Bowl. If you haven't seen them, the concept is simple: a person asks an AI chatbot a real, vulnerable question — how to communicate with their mom, how to get in shape — and gets genuine advice that suddenly pivots into a sponsored pitch for a cougar dating site or height-boosting insoles. The tagline: "Ads are coming to AI. But not to Claude."

They're funny. They're also the most important 60 seconds of tech advertising I've seen in years.

Not because of the brand rivalry — though watching Sam Altman call the ads "clearly dishonest" while defending his own plan to put ads inside ChatGPT was entertaining. It was yet another unflattering window into Sam's ethics.

And not because of the production quality, though the eerie pauses before each sponsored pivot genuinely made my skin crawl.

They matter because they force a question that most people building with AI haven't thought about yet: when you rely on an AI for real decisions, who is it actually optimizing for?

The word "ads" undersells the problem

OpenAI has said their ads will be "clearly labeled" and won't influence ChatGPT's responses. Google has officially denied plans to put ads in Gemini, though Adweek reported they've already briefed advertisers on a 2026 rollout, and they're already running ads in AI Overviews and testing them in AI Mode. The trajectory here is not subtle.

But "ads" makes it sound like a banner you can ignore. That's not what this is.

What we're really talking about is influence. In aggregate and over time, it feels a lot like control. Companies pay platforms like Google to guide users toward actions that serve the advertiser's interests. They pay because it works — that's why it's a trillion-dollar industry. But the mechanism depends on a gap between what you think the tool is doing (helping you) and what it's actually doing (serving two masters).

In a search engine or social media site, that gap is manageable. You know some results are paid. You've learned to scan past them, even if they still nudge you toward choices you might not otherwise make. In a conversation with an AI — one where you've shared your health concerns, your business problems, your family situation — that gap becomes a violation of trust.

Anthropic's blog post on this nails the core issue: an ad-supported AI assistant talking to someone who can't sleep has two competing objectives. One is to explore what's actually causing the insomnia. The other is to figure out whether the conversation presents "an opportunity to make a transaction." Those objectives sometimes align. When they don't, you'll never know which one won.

Why this matters if you run a business

If you're a founder or operator evaluating AI tools, this isn't an abstract philosophical debate. It's a procurement question.

When you plug AI into your workflows — into how your team researches, communicates, makes decisions — your AI provider's business model shapes what the tool optimizes for. An AI funded by subscription revenue is optimizing to be genuinely useful so you keep paying. An AI funded by advertising revenue has a second optimization target: keeping you engaged and surfacing opportunities for third parties.

These incentives compound. As Anthropic pointed out, "the history of ad-supported products suggests that advertising incentives, once introduced, tend to expand over time." Facebook didn't start as an attention-harvesting machine. It became one because the business model demanded it. The ads got more targeted. The algorithm got more addictive. The product got worse for users and better for advertisers, one invisible increment at a time.

Now imagine that dynamic inside a tool that drafts your emails, analyzes your contracts, and advises you on strategy.

What I'm building and why this is personal

Full disclosure: I'm an Anthropic shareholder. So take my enthusiasm for these ads with the appropriate grain of salt.

But I'm writing about this because it connects directly to what we're building at Force Multiplier AI. One of our core principles is sovereignty — the idea that your data should create intelligence that works exclusively for you, not for the platform you're handing it to. We didn't arrive at that principle because it sounds good in marketing copy. We arrived at it because the alternative is your AI tools gradually becoming less yours and more someone else's.

The Super Bowl ads were about consumer AI. But the same logic applies — maybe even more so — in the enterprise. If the AI reviewing your legal documents has a financial relationship with a legal services provider, how would you know? If the AI recommending your marketing strategy is optimized partly for your engagement rather than effectiveness, what does that cost you over time?

These aren't paranoid hypotheticals. They're the natural endpoint of ad-supported AI in business contexts.

Anthropic's coming-out party

One more thing worth noting: these ads felt like a company stepping out of the shadows. Anthropic has been quietly dominating enterprise API usage and developer mindshare — Claude Code has become the default for a huge swath of the engineering world. Opus 4.6, which dropped last week, is amazing. It's a natural extension of Opus 4.5, which has been my daily driver for months.

But many people outside tech still haven't heard of them. A Super Bowl ad changes that. And the fact that they used their first major consumer moment not to demo features but to stake out a values position tells you something about how they think about the long game.

The coming years will force every AI company to answer the same question these ads posed: who does your AI work for?

The companies that answer "you, unambiguously" will win. The ones that try to serve two masters will discover — as every ad-supported platform eventually does — that users notice when the tool stops being theirs.

If you're choosing AI tools for your business, start asking this question now. Not "which model scores highest on benchmarks," but "what is this company's incentive when my AI gives me a recommendation?"

The answer matters more than any benchmark ever will.

— Matt