Advertising in AI is a trust experiment marketers can’t ignore

The biggest advertising moment of 2026 didn’t come from the most popular Super Bowl spot or a cinematic brand anthem. It happened on the second Sunday in February, in a self-aware ad that carried a pointed message.

In positioning itself against a rival, Anthropic's Claude campaign raised a question that could signal a turning point for the industry: what happens when artificial intelligence platforms start making money from advertisers?

Claude’s maker, Anthropic, built its campaign around a simple promise: “Ads are coming to AI. But not to Claude.” The humor landed because it highlighted an uncomfortable future—personal questions interrupted by sneaker ads, relationship concerns met with dating app promotions, startup advice followed by payday loan offers.

For now, the campaign sets Claude apart from ChatGPT. But the subtext is bigger than competitive positioning: the human–AI relationship is evolving.

The ads referenced OpenAI’s decision to test advertising in ChatGPT—not embedded within responses, as the satire suggested, but displayed below answers for free or entry-level users. These placements are labeled “sponsored” and, according to OpenAI leadership, do not influence outputs.

On the surface, this seems straightforward. But AI isn’t experienced like a billboard or banner—it is experienced as a conversation, as assistance, and increasingly, as companionship. That context raises the stakes considerably.

The Facebook echo

The tension was crystallized in a recent New York Times opinion piece by former OpenAI researcher Zoë Hitzig, titled “OpenAI Is Making the Mistakes Facebook Made. I Quit.”

Hitzig acknowledges a simple economic reality: AI is expensive to operate, and advertising can provide a critical revenue stream. But she also issues a deeper warning—the ethical tremors that arise when monetization begins to leverage patterns of human thought.

We’ve seen this movie before. In its early years, Facebook promised users meaningful control over their data—even the ability to vote on policy changes. Those commitments faded as advertising revenue surged. Financial incentives reshaped the product. The product reshaped behavior. Trust eroded slowly, almost imperceptibly.

That’s why, even if OpenAI insists ads and answers won’t cross streams, the shift itself matters. It has opened the barn door and started leading the horse out by the reins. Once advertising digs in, it tends to find purchase. (No pun intended.)

Trust isn’t just about privacy

Why does this matter so much? Because trust isn’t just a privacy policy—it’s an expectation, the emotional contract users assume when they share something personal with a machine.

In my book Appreciated Branding, I argue that brands earn trust when their intent is clearly aligned with human needs, not when those needs are quietly repurposed for commerce.

The moment a platform turns empathy-seeking inputs into advertising opportunities, the emotional math shifts. Advertising in AI exposes a profound cultural fault line: are these tools designed for honest assistance—or for monetization?

In traditional ecosystems—search engines, social feeds, or television—we have a clear contextual contract. Ads live on the perimeter, and we expect them; we compartmentalize them.

In AI chat, that perimeter disappears. The interface is the conversation—like talking to a therapist who also has a side hustle selling comfort animals.

There’s no sidebar, no separate distraction. The experience is immersive and relational. If users start to feel that their intimate questions are funding someone else’s revenue, the safe space becomes contaminated—and that contamination spreads faster than any clarification.

From an appreciated brand perspective, this is a hard bell to unring. Trust isn’t owned—it’s earned and reinforced repeatedly through signals of alignment and restraint.

Brands that act with empathetic transparency understand that short-term monetization gains can lead to long-term relational losses. Once users suspect ulterior motives, they withdraw—not just behaviorally, but emotionally.

Embedding ads in an interface where users share personal concerns risks transforming AI from trusted helper into commercial shill. Trust leaves the chat, and something far more costly than infrastructure breaks.

The business case for restraint

To be clear, the business pressures are real. AI infrastructure is costly, free tiers need support, and investors expect returns. Advertising is a proven, scalable revenue engine.

But the strategic question marketers should ask is this: what if monetizing attention inside AI erodes the very trust that makes it valuable?

If users start to believe their personal inputs are indirectly fueling commerce, they will adapt: self-censor, withhold context, and turn to paid alternatives or platforms promising neutrality.

In other words, the data well runs dry. Advertising inside AI—whether within the chat or around it—could trigger a subtle but devastating behavioral shift: less honesty, less vulnerability, and less richness in interaction. Ironically, this diminishes the very effectiveness advertisers hope to achieve.

A different path for brands — and the counterargument

This is where marketers need to shift their thinking. If AI platforms can remain spaces where people feel understood without being sold to, brands have a real opportunity to earn trust—by being discoverable through AI visibility rather than paid placement.

AI already favors brands that demonstrate clarity, utility, and problem-solving partnerships while preserving user agency. That’s the principle of appreciated branding at scale: solve first, sell as a byproduct.

Platforms that maintain a clear firewall between assistance and monetization may discover something counterintuitive: preserved trust drives lifetime value. Brands that honor the emotional weight of AI interactions can earn deeper loyalty than those chasing opportunistic impressions.

History offers perspective. I remember a time when consumers said they would rather walk outside naked to get their newspaper than enter their credit card online.

We adapt. Norms evolve. What feels invasive today can become ordinary tomorrow. Clearly labeled, well-regulated advertising below AI responses may eventually be culturally acceptable, with users setting their own boundaries and moving on. Trust could recalibrate rather than collapse.

But the key difference is intimacy. Credit card data is transactional; AI conversations are relational. Once that trust is fractured, it doesn’t rebuild as easily as digital payment habits.

The real experiment

Advertising in AI isn’t inherently wrong—it may even be economically necessary. But it’s a trust experiment, and trust experiments don’t come with unlimited retries.

If AI platforms misstep and users feel their vulnerability is being quietly monetized, the consequences extend far beyond one company’s quarterly earnings. It could reshape expectations for human–technology interaction, shifting the cultural agreement from “this tool is here to help me” to “this tool is here to extract value from me.”

Once that agreement changes, rebuilding it will be far more costly than any data center ever built. For marketers observing this unfold, the lesson is clear: trust isn’t a feature—it’s infrastructure. Selling the ground beneath it is a one-way, irreversible transaction.
