Intent is the Interface: Branding in the Age of Agentic Mediation
As interfaces dissolve into temporary forms, the industry is paralyzed by a false binary between human agency and AI automation. The future belongs to brands that master the Legible-Lovable Law, ensuring they are structured for machine parsing while remaining emotionally resonant when rendered across infinite, intent-driven contexts.
The agency panic is a symptom. Intent is the shift. Over the last couple of weeks, I’ve heard the same sentence repeated in different rooms—marketers, digital teams, brand people, platform folks: “We never want AI agents to make decisions for us.”
It usually comes paired with a second anxiety: advertising inside agents. The fear that the agent becomes a broker, a gatekeeper, a quiet manipulator. A future where you lose agency and something else chooses your life in your place. I understand the discomfort. I also think the debate is trapped in a false binary.
Beyond the Binary: The Agency Dial
Because “agency” is not a yes/no switch. It is a dial. And it moves with context.
The mistake is assuming humans want one relationship with choice.
Sometimes you want the agent to compress the world: Reorder the things you already buy. Book the flight. Handle the tedious, low-stakes decisions you don’t want to spend a life on. Sometimes you want the opposite: You want to browse. You want a visually rich, almost tactile exploration, something closer to wandering than purchasing. You want the experience to be the point.
And sometimes the stakes are too high for either extreme: Financial products. A car. A house. Healthcare decisions. Anything that carries consequence. In those contexts, you don’t want “automation.” You want deliberation, structured questioning, traceable reasoning, and the ability to slow the system down.
This is why the simplistic story collapses. The future isn’t “agents decide” versus “humans decide.” The future is: humans express intent, and systems configure around it.
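The "dial" above can be sketched as a dispatch from expressed intent to an interaction mode. This is purely illustrative: the `Mode` names, the `intent` fields (`stakes`, `goal`), and the `configure` function are hypothetical, and a real system would infer these signals rather than receive them as a dict.

```python
from enum import Enum, auto

class Mode(Enum):
    """Illustrative interaction modes along the agency dial."""
    EXECUTE = auto()      # compress choice and act (reorders, routine bookings)
    EXPLORE = auto()      # render a rich, browsable experience
    DELIBERATE = auto()   # slow down: structured questions, traceable reasoning

def configure(intent: dict) -> Mode:
    """Map an expressed intent to an interaction mode (hypothetical fields)."""
    if intent.get("stakes") == "high":
        return Mode.DELIBERATE   # consequence overrides convenience
    if intent.get("goal") == "browse":
        return Mode.EXPLORE      # the experience is the point
    return Mode.EXECUTE          # default: collapse the decision and fulfil it

print(configure({"goal": "reorder", "stakes": "low"}))   # Mode.EXECUTE
print(configure({"goal": "browse"}))                     # Mode.EXPLORE
print(configure({"goal": "mortgage", "stakes": "high"})) # Mode.DELIBERATE
```

Same human, three intents, three different shapes of interface, which is the whole point: the mode is configured around the intent, not fixed in advance.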
The Wright Brothers Moment: Why Early Messiness Matters
In my manifesto I described it plainly: as the distance between intention and fulfilment collapses, interfaces stop being destinations and start becoming temporary forms, appearing only when needed, in the most natural shape for the moment. That is the part the industry keeps missing.
We are judging an early system as if it were the final one. What we’re seeing today is not “the agent economy.” It’s the early, fragile version of it.
This is the Wright brothers moment. The first bumpy flights didn’t tell you what aviation would become; they told you a new substrate had arrived, and the world would reorganise around it.
OpenClaw is a useful symbol here. It’s a capability signal, not because it’s perfect, but because it’s messy. It shows both the direction and the immaturity at the same time: rapid adoption, local execution, and a chaotic extension ecosystem that has already attracted serious security concerns.
That pattern matters. Capability arrives before governance. Every time.
The Great Convergence: When Interfaces Become Fluid
Convergence changes the meaning of “shopping,” “search,” and “decision.” The reason black-and-white thinking fails is that agents will not evolve alone. They will converge with other trajectories:
- World models that can generate interactive environments, not just answers.
- Ambient surfaces—phones, cars, PCs, and eventually glasses—where the “interface” becomes whatever surface is present.
- On-device / edge compute as a counterforce to pure cloud dependency, accelerating local processing and changing the privacy/surveillance calculus.
When these trajectories converge, the question becomes less “what does the website look like?” and more: What does the human want right now, and what experience shape does that intent summon?
If I tell my system I want an immersive, visually rich browsing experience, it can generate one digitally. If I tell it I want something resolved quickly, it can collapse the decision into a shortlist and execute. If I tell it the stakes are high, it can slow down, interrogate, and make the reasoning legible.
Same human. Different intent. Different interface.
The Shortlist Effect: Where Economic Friction Meets AI
The Shortlist Effect isn’t a theory. It’s already economic friction. I introduced the term to name a simple mechanism: agents compress infinite choice into a shortlist, and brands survive by being both machine-legible and human-lovable.
We can now see early evidence of the funnel compression. Pew’s analysis of Google usage (March 2025 data) found that when an AI summary appears, users click traditional search results far less—8% versus 15% without a summary.
The economic conflict is no longer subtle. This is what “mediation” looks like when it hits real markets. Not opinion. Not vibes. Pressure.
Advertising isn’t the core problem. Misaligned objectives are. The argument about “ads in agents” is emotionally convenient. It gives the industry a villain. But the real question is colder: What does the agent optimise for, and who controls that objective function?
If the agent is rewarded for platform revenue, it will shape choices in ways that are invisible until they become normal. If the agent is governed to honour user intent under transparent constraints, then delegation can still feel like agency, because it is.
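That misalignment can be made concrete with a toy objective function. Everything here is hypothetical, including the option names, the scores, and the single `revenue_weight` knob; the point is only that the same agent returns different answers depending on who sets the weight.

```python
# Hypothetical catalogue: each option has a fit-to-user-intent score
# and a margin the platform earns if the agent picks it.
OPTIONS = {
    "best_for_user":     {"intent_fit": 0.9, "platform_margin": 0.1},
    "best_for_platform": {"intent_fit": 0.6, "platform_margin": 0.9},
}

def agent_pick(options: dict, revenue_weight: float) -> str:
    """Pick the option maximising a blended objective.

    revenue_weight = 0.0 -> pure user-intent optimisation;
    revenue_weight = 1.0 -> pure platform-revenue optimisation.
    """
    def score(o):
        return (1 - revenue_weight) * o["intent_fit"] + revenue_weight * o["platform_margin"]
    return max(options, key=lambda name: score(options[name]))

print(agent_pick(OPTIONS, 0.0))  # best_for_user
print(agent_pick(OPTIONS, 0.8))  # best_for_platform
```

Nothing in the interface changes between the two calls. The shaping is invisible at the surface, which is exactly why the governance of the objective function, not the presence of ads, is the real question.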
The Legible-Lovable Law: A New Physics for Brands
This is also why the old advertising playbook doesn’t port cleanly. Agents don’t “read ads.” They parse structured truth. That changes where power sits: from persuasion theatre to verification, provenance, and experience fidelity.
When intent becomes the interface, brands are not “visited.” They are rendered. This is why my Legible-Lovable Law is not a framework; it’s closer to physics:
In a world of machines, to be legible is to exist. In the presence of humans, to be lovable is to matter.
- Legibility means structured truth, verifiability, and an API-grade publication of reality.
- Lovability means codified experiential DNA—values, myths, and rituals—so the brand remains coherent when rendered across infinite moments.
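A minimal sketch of what "an API-grade publication of reality" can mean in practice: expressing a product as structured data in the schema.org vocabulary, which machines can parse without inference. The product, brand name, and price here are placeholders, and this is one possible encoding, not a prescription.

```python
import json

# Hypothetical example of machine legibility: one product published as
# structured truth using schema.org types an agent can parse directly.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail Jacket",  # placeholder product
    "brand": {"@type": "Brand", "name": "Example Co"},
    "offers": {
        "@type": "Offer",
        "price": "149.00",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product, indent=2))
```

Legibility is this half of the law: verifiable facts in a shared vocabulary. Lovability is everything the structured record cannot carry, and both must survive the rendering.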
Conclusion: The Intent Mandate
People are afraid of losing agency because they assume the system will have one mode: decide everything. That’s not how this plays out. The system will have many modes, because humans do. What will remain stable, across all those modes, is intent.
The uncomfortable part is not that agents will decide. The uncomfortable part is that you will be forced to become explicit about what you want, how much friction you tolerate, and what you value.
The market will reorganise around that. That is not a narrow technology story. It’s a human one. And brands that still think in campaigns and channels are going to find themselves arguing about “ads in agents” while the interface underneath them dissolves.