The Agentic Erosion of Corporate Fiction

For decades, corporate power has hidden in the gaps of manual coordination and the useful fiction of the org chart. Agentic systems are starting to take over that coordination layer, and in doing so they force organizations to confront how far the official structure diverges from the way work and power actually move.

Here’s something that took me embarrassingly long to learn: the org chart is fiction. Useful fiction, sometimes, like a subway map that distorts geography to clarify routes. But fiction.

I spent my first few years in large organizations believing that if I could just understand the process — really understand it — I’d know how things worked. I read the RACI matrices. I studied the governance decks. I mapped the workflows. And then I’d watch a product launch actually happen and realize none of it described the thing I was seeing. The real work moved through back channels, pre-meetings, someone’s relationship with someone else’s boss, and a deeply unofficial understanding of which teams would actually do what they said they’d do.

The process was the costume. The politics was the body.

Power lives in the gaps

This isn’t a revelation. Anyone who’s spent five years inside a big company knows it. But it’s worth being specific about where politics actually lives, because that’s what makes the current AI moment interesting.

Politics lives in handoffs. In who defines the blocker. In who gets consulted early versus informed late. In the person who can delay a decision for three weeks without ever appearing to obstruct anything — they’re just “raising concerns,” “ensuring alignment,” “looping in stakeholders.” Politics lives in the fact that nobody can quite remember why a decision was made last quarter, which means whoever reconstructs the narrative gets to reshape it. It lives in the six people on a cross-functional team who technically have the same information but actually don’t, because context degrades every time it passes through another human.

Coase figured out in 1937 that firms exist because markets have transaction costs. What he didn’t dwell on is that firms also have transaction costs — enormous ones — and that whole careers can be built inside them. Not by reducing them. By managing them. Sometimes by quietly perpetuating them.

I once watched a senior program manager spend an entire quarter building a “status reconciliation process” for a transformation program. Meetings to align on what other meetings had decided. Decks summarizing decks. A tracker tracking other trackers. It looked like diligence. It was actually a power play: by becoming the person who synthesized the narrative, he became the person who controlled it. Nobody could go around him because nobody else had the full picture. He was the full picture.

That’s not unusual. That’s Tuesday.

Why every previous tech wave failed to fix this

ERP was supposed to give us visibility. It gave us data entry. Lean was supposed to give us flow. It gave us ceremonies. Dashboards were supposed to give us transparency. They gave us screenshots in slide decks. Digital transformation — that magnificent blank check of a phrase — was supposed to make the enterprise legible. Mostly it made the enterprise more instrumented, which is not the same thing.

Here’s why: all of those systems could record coordination. They could timestamp a decision, map a process, visualize a bottleneck, store a document. What they couldn’t do was carry coordination. They couldn’t hold context across handoffs. They couldn’t reconcile competing priorities without a human referee. They couldn’t tell the difference between a delay caused by genuine complexity and one caused by someone protecting their turf.

They were cameras. The organization still needed directors, actors, and a script.

So the politics survived. It just learned to perform in front of the new cameras. People got good at filling in the workflow tool in ways that looked compliant while the actual work continued to move through informal channels. James Scott wrote about this in a different context — how legibility projects imposed by states tend to produce a simplified official reality alongside a persistent informal one. Corporations do the same thing. The SAP system says one thing. The WhatsApp group says another.

What’s actually different now

Agentic AI systems don’t just observe the workflow. They start to do parts of it — querying across silos, holding context over time, routing tasks, flagging exceptions, triggering actions. Which means they begin to replace coordination that was really just logistics: the meetings that exist because nobody trusts the system to carry the state of play, the updates that substitute for memory, the reconciliation of things that should have been reconciled by design.
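To make that concrete, here is a minimal sketch in Python of the kind of logistics an agent layer can absorb. Every name in it is invented (the trackers, the reconcile_handoff function, the routing fields), but the shape is the point: pull state from two systems that should already agree, flag the discrepancy, and route it, instead of waiting for a meeting to discover it.

```python
from dataclasses import dataclass

# Hypothetical records from two systems that are supposed to describe
# the same piece of work. In most firms a human reconciles these by hand.
@dataclass
class TrackerItem:
    item_id: str
    status: str
    owner: str

def reconcile_handoff(erp_view: TrackerItem, team_view: TrackerItem) -> dict:
    """Compare two views of the same work item and decide what to do next."""
    if erp_view.item_id != team_view.item_id:
        raise ValueError("Views describe different items")
    discrepancies = {
        f: (getattr(erp_view, f), getattr(team_view, f))
        for f in ("status", "owner")
        if getattr(erp_view, f) != getattr(team_view, f)
    }
    if not discrepancies:
        return {"action": "no_action", "item": erp_view.item_id}
    # The routing rule is trivial; the point is that it runs continuously
    # instead of waiting for a weekly status meeting to surface the gap.
    return {
        "action": "flag_exception",
        "item": erp_view.item_id,
        "discrepancies": discrepancies,
        "route_to": team_view.owner,
    }

# Example: the ERP and the team tracker disagree about the same launch item.
erp = TrackerItem("launch-42", status="blocked", owner="platform")
team = TrackerItem("launch-42", status="in_progress", owner="growth")
print(reconcile_handoff(erp, team))
```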

I want to be careful here, because the hype is already running ahead of the reality. These systems can take over more of the coordination layer than previous software could. But they can’t handle genuine ambiguity well, and they shouldn’t pretend to. When two departments disagree about a real tradeoff — speed to market versus regulatory caution, say — no agent resolves that in any meaningful human sense.

But a lot of what passes for “judgment” in organizations isn’t actually judgment. It’s someone carrying context because the system can’t. It’s someone remembering what was decided in March because nobody wrote it down. It’s someone manually reconciling things that should have matched by design.

Remember the program manager I mentioned, the one who spent a quarter becoming the only person with the full picture? When the full picture is queryable, when decisions have provenance, when context persists without someone manually maintaining it, that move stops working. Not because power disappears. Because one of its oldest hiding places does.
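In practice, "decisions have provenance" can mean something mundane. A sketch, with made-up names and fields, of how little machinery it takes to make "why did we decide this, and what did it replace?" a query rather than someone's reconstructed narrative:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# A hypothetical decision record. The specific fields are illustrative; the
# point is that "why did we decide this, and who was there?" becomes a query
# instead of whoever reconstructs the narrative most confidently.
@dataclass
class DecisionRecord:
    decision_id: str
    summary: str
    decided_by: list[str]
    rationale: str
    supersedes: str | None = None
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def why(log: list[DecisionRecord], decision_id: str) -> list[DecisionRecord]:
    """Walk the provenance chain for a decision, newest first."""
    by_id = {rec.decision_id: rec for rec in log}
    chain, current = [], by_id.get(decision_id)
    while current is not None:
        chain.append(current)
        current = by_id.get(current.supersedes) if current.supersedes else None
    return chain

# Example: the Q4 delay supersedes the Q3 launch decision, and the chain says so.
log = [
    DecisionRecord("d-007", "Ship EU launch in Q3", ["maria", "dev"],
                   rationale="Regulatory review cleared"),
    DecisionRecord("d-011", "Delay EU launch to Q4", ["maria", "legal"],
                   rationale="New guidance from the regulator", supersedes="d-007"),
]
for rec in why(log, "d-011"):
    print(rec.decision_id, rec.summary, "|", rec.rationale)
```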

Most corporate AI talk stays at the level of productivity. The deeper shift is political: agentic systems erode the structural advantage of people whose influence depended not on what they knew, but on the fact that nobody else could access it. I find that mostly good, and partly terrifying.

Protocol is not the absence of politics

Here’s where I part ways with the techno-optimist version of this story.

Someone has to decide what the ontology looks like. Someone sets the escalation thresholds, defines what counts as an exception, determines which objectives the system optimizes for. Those aren’t technical decisions. They’re political decisions expressed in technical form. The person who designs the workflow rules is making governance choices. The person who defines the permissions model is drawing power boundaries.
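A toy example makes the point. Every role and threshold below is invented, but each entry is a governance choice wearing a config file's clothes: who may approve what, at what amount a decision leaves the team, and who must be consulted before it does.

```python
# A hypothetical escalation policy for purchase approvals. None of these roles
# or thresholds come from a real system; the point is that each entry is a
# power boundary that can now be read, diffed, and contested.
ESCALATION_POLICY = [
    # (maximum amount in EUR, role allowed to approve, who must be consulted)
    (10_000, "team_lead", []),
    (100_000, "department_head", ["finance_partner"]),
    (float("inf"), "cfo", ["finance_partner", "legal"]),
]

def route_approval(amount_eur: float) -> dict:
    """Return who can approve a purchase of this size and who must be consulted."""
    for limit, approver, consulted in ESCALATION_POLICY:
        if amount_eur <= limit:
            return {"approver": approver, "consulted": consulted}
    raise ValueError("Policy has no matching tier")

print(route_approval(8_500))    # stays with the team lead
print(route_approval(250_000))  # escalates to the CFO, with legal consulted
```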

Protocol is not an escape from politics. It’s politics forced into a form where it can be read. The gatekeeper in the hallway operated on discretion and deniability. The gatekeeper in the code operates on rules that can, in principle, be inspected, challenged, and changed. That’s not utopia. But it’s better than pretending the hallway version was meritocratic.

What this actually requires

The harder question is who designs the new systems. The people who currently hold power. Why would they encode anything other than their existing advantages?

I don’t have a clean answer. The optimistic case is that legibility creates pressure — once rules are visible, they can be contested in ways informal norms can’t. The pessimistic case is that the new gatekeepers will be the people who understand the architecture, and their power will be harder to challenge because it presents itself as objective. Both are probably true at once.

Which means governance isn’t a feature to be added later. It’s the whole game. And the questions are specific. Which parts of your coordination are genuine judgment and which are compensation for systems that couldn’t carry context? If you can’t tell the difference, you’ll automate the wrong things — give an AI the job of reconciling three trackers instead of asking why there are three trackers. The worst outcome isn’t that automation fails. It’s that it succeeds at scaling something that shouldn’t exist.

Permissions, escalation rules, decision rights, exception handling — these aren’t implementation details for an architecture team. They’re the new power structure of the firm. Where must human judgment remain sovereign? Not “humans in the loop” as a slogan — as enforceable commitments. With what authority. With what right to override the system when the system is confidently wrong.
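As an enforceable commitment rather than a slogan, "humans in the loop" can be as blunt as a rule the system cannot route around. A sketch, with assumed categories and thresholds:

```python
from dataclasses import dataclass

# Hypothetical categories where human judgment stays sovereign. The list is
# illustrative; the mechanism is what matters: the agent may propose, it may
# not execute, and an explicit human ruling always wins.
HUMAN_SOVEREIGN = {"terminate_contract", "disclose_to_regulator", "headcount_change"}
CONFIDENCE_FLOOR = 0.85  # below this, even routine actions go to a person

@dataclass
class ProposedAction:
    category: str
    description: str
    agent_confidence: float

def decide(action: ProposedAction, human_override: str | None = None) -> str:
    """Return 'execute', 'hold_for_human', or the human's explicit verdict.

    The override argument is the enforceable part: once a person has ruled,
    that ruling is returned unchanged, however confident the system is.
    """
    if human_override is not None:
        return human_override
    if action.category in HUMAN_SOVEREIGN:
        return "hold_for_human"
    if action.agent_confidence < CONFIDENCE_FLOOR:
        return "hold_for_human"
    return "execute"

# Example: routine work flows through; the sensitive action waits for a person.
routine = ProposedAction("reorder_supplies", "Restock badge printers", 0.97)
sensitive = ProposedAction("terminate_contract", "End the vendor agreement", 0.99)
print(decide(routine))                               # execute
print(decide(sensitive))                             # hold_for_human
print(decide(sensitive, human_override="rejected"))  # rejected
```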

Most executives have no training in this and no incentive to prioritize it until something breaks.

I’ve spent enough time in large organizations to know that the unofficial motto of most of them is: “We know this doesn’t make sense, but it’s how things work.” There’s a weary pragmatism in that. Sometimes it’s even wise. Not every inefficiency is waste; some of it is cushioning that keeps a contradictory system from snapping under its own weight.

But a lot of it is just feudalism that learned to dress business casual. People spending their energy navigating opacity instead of doing work. Talented people managing upward instead of outward. Whole layers of the organization dedicated to the care and feeding of coordination failures nobody has the incentive to fix.

Agentic systems won’t fix all of this. They’ll fix some of it and create new problems we can’t fully see yet. But they do something previous tech waves didn’t: they go after the coordination layer itself, not just the tools around it. And that means the political economy of the firm — who holds power, how, and why — is genuinely up for renegotiation.

The question was never really whether organizations would adopt AI. It was whether they meant what they said about transparency, accountability, and meritocracy once the fog stopped protecting the gap between the story and the structure.

That test is here now.

The fog is clearing.

A lot of organizations are about to find out what they actually are.