When Execution Is No Longer Human
Formal authority, execution power, and the governance problem in the age of AI
A classic problem in organizations is the relationship between formal authority and execution power.
These two are often not the same. Someone may hold formal authority, while someone else determines what really happens on the ground. Much of governance is about managing that gap.
But this framework assumes that execution is performed by humans.
Once execution is increasingly mediated by AI systems and agents, the logic becomes less clear.
The discussion here is about AI as a form of automation rather than autonomy. But it is not automation in the traditional sense of deterministic scripts, fixed workflows, or rule-bound software pipelines. The systems that matter here are probabilistic, adaptive, and capable of handling open-ended tasks under uncertainty.
The working assumption is narrower than a full theory of AGI. Current agents are still systems designed, deployed, and governed by humans, not independent subjects with their own consciousness, legitimacy, or ends. Even so, this kind of stochastic automation is already importantly different from classical automation, and that is enough to create a new governance problem.
The old model: formal authority and execution power
In human organizations, governance usually has a recognizable foundation. There is some notion of formal authority: hierarchy, legal mandate, organizational role, contractual accountability. There is also execution power: the actual ability to make things happen.
These two often diverge. A manager may formally own a process while a senior engineer, regional operator, or staff researcher controls the real bottleneck. But even when execution power drifts away from formal authority, the organization still treats formal authority as the normative foundation. If things go wrong, the question is still who was supposed to be in charge.
That is why human governance often revolves around a familiar tension:
- Who has the right to decide
- Who actually controls execution
And when those drift too far apart, organizations intervene. They may recentralize power, rotate operators, create extra oversight, or formalize the authority of those who are already indispensable.
This is the classic dance between de jure authority and de facto power.
Why AI changes the picture
AI systems complicate this framework in a fundamental way.
An AI agent can exercise execution capacity without possessing any intrinsic source of legitimacy. It has no title, no legal identity, no genuine accountability, and no natural place in an organizational hierarchy. Yet it may draft plans, write code, prioritize information, communicate with stakeholders, invoke tools, and increasingly coordinate work.
This creates an unusual asymmetry.
Humans have legitimacy and agency.
Code has determinism but no agency.
Agents have a kind of execution agency without legitimacy.
That is a very different governance object.
When a human acts, we can ask whether they had the right to do so.
When code runs, we can ask whether it followed specification.
When an agent acts, neither question is fully sufficient.
The agent may not have formal authority. It may not even be deterministic enough to be treated like ordinary software. And yet it may shape outcomes in ways that matter.
That is why AI governance inside organizations is not just a scaled-up version of software governance. It is a different problem.
Humans, code, and agents are governed differently
One way to see the shift is to compare the logic of governance across three objects: humans, code, and agents.
| Dimension | Humans | Code | Agents |
|---|---|---|---|
| Core property | Intentional action | Deterministic execution | Probabilistic action |
| Main control mechanism | Incentives and authority | Specification and verification | Policy, architecture, and monitoring |
| Accountability | Relatively clear | Traceable through versioning and deployment | Often ambiguous |
| Failure mode | Misalignment or disobedience | Bugs or incorrect implementation | Misinterpretation, drift, or unexpected behavior |
| Governance question | Who is responsible | Did it follow spec | Who shaped its behavior and who owns the outcome |
Humans are governed through incentives, norms, and hierarchy. Code is governed through specifications, tests, permissions, and deployment control. Agents are governed through prompts, policies, tool access, memory, routing, approval loops, and evaluation.
The key point is that agents differ from both humans and code because their behavior is governed less by role or specification and more by architecture.
The key shift: from governing actors to governing architectures
In a traditional organization, a manager allocates work to people. In an AI-mediated organization, work increasingly flows through systems: shared agents, tool-using copilots, automated research assistants, code generation pipelines, and decision support workflows.
At that point, the central governance question is no longer only who does the work. It becomes who designs the system through which work gets done.
This is a different kind of power. It is not merely formal authority and not merely operational execution. It is the power to define the architecture of execution itself.
- Who sets the system prompt
- Who controls tool access
- Who decides what memory is retained
- Who defines the evaluation criteria
- Who sets escalation thresholds
- Who can override the agent
These are not just technical settings. They are governance levers.
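To see how these levers become concrete, consider a minimal sketch of a shared agent's governance configuration. Everything here is hypothetical, including the field names; the point is only that each lever is an explicit setting that someone owns, not an emergent property of the model.

```python
from dataclasses import dataclass, field

@dataclass
class AgentGovernanceConfig:
    """Hypothetical governance levers for a shared agent, written as explicit settings."""
    system_prompt: str                       # standing instructions that frame every task
    tool_allowlist: dict[str, str]           # tool name -> role that granted access
    memory_retention_days: int               # what the agent may remember, and for how long
    evaluation_criteria: list[str]           # what counts as a good output
    escalation_threshold: float              # confidence below which a human must review
    override_roles: set[str] = field(default_factory=set)  # who may overrule the agent

config = AgentGovernanceConfig(
    system_prompt="Prioritize evidentiary rigor over speed.",
    tool_allowlist={"literature_search": "research_lead", "lab_scheduler": "platform_team"},
    memory_retention_days=90,
    evaluation_criteria=["traceability", "reproducibility"],
    escalation_threshold=0.7,
    override_roles={"research_lead", "compliance"},
)
```

Whoever can change `escalation_threshold` or `tool_allowlist` changes what the organization actually does, without any change to the org chart. That is architectural power in its smallest unit.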
This is why AI-era organizations will increasingly have to confront a third category of power:
- Formal authority
- Operational power
- Architectural power
Architectural power is control over the systems, constraints, and evaluation loops that steer execution.
Formal authority belongs to the world of legitimacy: who has the recognized right to decide. Operational power is the practical ability to make things happen. Architectural power is different again. It is the ability to shape the agentic systems through which execution now flows. It is neither just authority over people nor just specification of deterministic code. It is control over the architecture that steers probabilistic execution.
Architectural power does not inherently require formal authority. But once it becomes central to organizational execution, formal authority will usually try to absorb it, constrain it, and legitimize it.
The scope condition remains the same as above: current agents are still treated as systems designed, deployed, and governed by humans, not independent moral actors. The question in view is therefore not agent sovereignty, but human control over agent-mediated execution.
```mermaid
flowchart LR
    subgraph OLD[Old model]
        A1[Formal authority] --> A2[Human execution]
        A2 --> A3[Operational power]
    end
    subgraph NEW[AI-mediated model]
        B1[Formal authority] --> B4[Governance challenge]
        B2[Architectural power] --> B3[AI-mediated execution]
        B3 --> B5[Operational outcomes]
        B5 --> B4
        B2 --> B4
    end
```
The shift is not only that execution moves into systems. It is also that the worker itself is increasingly designed. Organizations have always designed processes and incentives around workers. Now, for a growing share of digital execution, they increasingly design the executor too. Governance can no longer be only about supervising labor. It must also address how the executor is specified.
Shared agents make the problem concrete
If I use a private research assistant, that is mostly a productivity story. Governance is simple because the principal is obvious.
But the moment an agent becomes shared infrastructure, the problem changes.
Consider a shared research agent embedded in an early-stage drug discovery program. It does not merely summarize papers. It helps decide which hypotheses merit another experimental cycle, which candidate compounds advance to the next gate, which negative results count as noise rather than warning, and which programs should lose funding before the next review.
Now the governance problem becomes much sharper. Research leads want scientific novelty and a higher chance of breakthrough. Platform teams want experimental throughput and better use of lab capacity. Finance wants earlier termination of low-probability projects. Quality and compliance teams want stronger evidence standards, traceability, and conservatism around ambiguous results.
All of them draw on the same agent. But “good research execution” means something different to each. Whether the agent favors novelty, caution, cost discipline, evidentiary rigor, or pipeline velocity depends on architectural choices: the system prompt, retrieval policy, memory design, escalation rules, tool permissions, and evaluation criteria.
Whoever controls those settings does not merely operate the agent. They shape which experiments get run next, which programs receive additional resources, and which lines of inquiry are abandoned. In effect, they shape the portfolio itself. That is architectural power — and it may not belong to whoever sits highest on the org chart.
With code, conflicts often reduce to specification and ownership: who wrote it, who reviewed it, who deployed it. The object being governed is stable. Code executes rules.
Agents are different. Their outputs are probabilistic, context-sensitive, and interpretation-heavy. They interpret goals under constraints and then act through operational pathways.
That means a shared agent is not just a technical asset. It is a political object inside the organization. Wherever interpretation exists, governance becomes necessary.
One useful way to frame this is as a multi-principal delegation problem. A shared agent implicitly serves several principals at once, each with a different priority ordering: novelty, evidentiary rigor, cost discipline, compliance, throughput, or reliability. The governance challenge is not only who uses the agent, but whose priorities are formally represented in its operating constitution.
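A toy model makes the multi-principal tension visible. The objectives, weights, and numbers below are invented for illustration; the only real claim is that whichever weighting the shared agent's constitution adopts becomes the organization's default.

```python
# Toy multi-principal delegation: the same proposed action, scored under
# different principals' priority weights. All values are illustrative.
PRINCIPALS = {
    "research_lead": {"novelty": 0.6, "rigor": 0.2, "cost": 0.2},
    "finance":       {"novelty": 0.1, "rigor": 0.2, "cost": 0.7},
    "compliance":    {"novelty": 0.1, "rigor": 0.8, "cost": 0.1},
}

# A proposed action: run another experimental cycle on a speculative compound.
action_profile = {"novelty": 0.9, "rigor": 0.4, "cost": 0.2}

def score(weights: dict[str, float], profile: dict[str, float]) -> float:
    """Weighted sum of how well the action serves each objective."""
    return sum(weights[k] * profile[k] for k in weights)

for principal, weights in PRINCIPALS.items():
    print(f"{principal}: {score(weights, action_profile):.2f}")
# research_lead scores this action highest; finance scores it lowest. Whichever
# weighting is baked into the shared agent's constitution decides the default.
```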
Why credit and responsibility become harder
This also explains why credit and blame become so much harder to assign in AI-mediated work.
Suppose a shared agent helps a company terminate weak programs earlier while concentrating resources on a candidate that later shows strong results. Who gets credit?
- The person who gave the original instruction
- The person who provided the context
- The person who designed the workflow
- The platform owner who chose the memory policy
- The evaluator who accepted the output
- The team lead who decided to rely on the agent
And if the result is poor, the same ambiguity applies to blame.
This is not just an accounting problem. It is a structural one. In traditional organizations, contribution is often attached to identifiable human effort or managerial responsibility. In AI-mediated organizations, contribution becomes distributed across prompts, policies, system design, human feedback, and approval decisions.
As a result, organizations may increasingly reward not only direct work, but also control over execution architecture.
That is one reason AI governance will become pressing very soon. The moment agents move from personal assistants to shared execution infrastructure, responsibility can no longer be assumed to map cleanly onto people or roles.
Governing architectural power
If architectural power is real, it also needs governance.
That does not require a complete constitutional theory of organizations. But it does suggest a few practical design principles.
First, shared agents should not be treated as ordinary internal tools. They should be treated as delegated systems whose priorities, permissions, and escalation rules must be explicitly defined.
Second, some degree of separation of powers becomes important. The same group should not always control prompt design, tool permissions, memory policy, evaluation criteria, and final approval. When too many of these levers sit in one place, hidden authority can accumulate without formal recognition.
Third, the architecture of execution should be auditable. System prompts, retained memory, tool access, evaluation rules, and override policies should be inspectable enough that organizations can reconstruct how a decision pathway was shaped.
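As a sketch of what reconstructable could mean here, assuming a simple append-only log and entirely hypothetical field names: each consequential output is tied to the architectural state that produced it.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One auditable entry: which architectural state shaped this output."""
    timestamp: str
    prompt_version: str        # which system prompt was live
    memory_snapshot: str       # which retained memory the agent saw
    tools_invoked: tuple       # which tools it actually called
    evaluation_ruleset: str    # which criteria judged the output
    approved_by: str           # which human accepted it

record = DecisionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    prompt_version="prompt-v12",
    memory_snapshot="mem-2026-03-01",
    tools_invoked=("literature_search", "lab_scheduler"),
    evaluation_ruleset="eval-rigor-v3",
    approved_by="research_lead",
)
print(json.dumps(asdict(record), indent=2))  # append to an immutable log in practice
```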
Fourth, shared agents may need something like an explicit priority ordering or operating constitution. In some environments, speed may dominate. In others, safety or compliance may take precedence. The important point is that this ranking should not remain buried inside platform defaults or informal prompt habits.
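One minimal way to make such a ranking explicit, rather than leaving it buried in prompt habits, is a lexicographic priority ordering that the agent's tooling enforces. This is a sketch under invented names, not a recommendation of any particular mechanism.

```python
# A hypothetical operating constitution: an explicit, ordered ranking of
# priorities, applied lexicographically when two options conflict.
CONSTITUTION = ("safety", "compliance", "evidence", "speed")

def prefer(option_a: dict[str, float], option_b: dict[str, float]) -> dict[str, float]:
    """Return the option that wins on the highest-ranked criterion where they differ."""
    for criterion in CONSTITUTION:
        if option_a[criterion] != option_b[criterion]:
            return option_a if option_a[criterion] > option_b[criterion] else option_b
    return option_a  # tied on every criterion

fast = {"safety": 0.8, "compliance": 0.9, "evidence": 0.5, "speed": 0.9}
slow = {"safety": 0.8, "compliance": 0.9, "evidence": 0.8, "speed": 0.4}
assert prefer(fast, slow) is slow  # evidence outranks speed in this constitution
```

Changing the order of `CONSTITUTION` is a governance decision, and it should be made where governance decisions are made.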
In that sense, governing architectural power is partly a problem of transparency, partly a problem of institutional design, and partly a problem of making delegation legible before conflict arrives.
Implications
Managers will not disappear, but their role shifts. They increasingly govern a hybrid of humans and agents: deciding what belongs to whom, setting approval boundaries, designing escalation rules, and preventing hidden power from accumulating in technical bottlenecks. A company may think it is governed by its org chart while in practice key leverage sits with the owners of agent platforms, memory systems, evaluation loops, and tool permissions.
The deeper point is about legitimacy. Human governance begins from it. AI systems can have permissions without it. As more work flows through socio-technical systems shaped by architecture, policy, data, and shared control, the old language of authority fits less and less well. That mismatch is not a minor implementation detail. It is a new governance layer.
Conclusion
Formal authority and execution power are no longer the whole story. Once execution flows through AI systems, a third form of power emerges: architectural power, control over the systems through which work gets done.
This is not a distant problem. It is here now.
The organizations that see it clearly will not just deploy better agents. They will govern them better.
Further notes
One-person companies
The fascination with the “one-person company” captures something real: agentic systems may compress many forms of digital execution into much smaller units than before. But that is only the first visible effect. The deeper story is broader organizational reorganization. Large organizations may become even more formidable if they can combine scale, capital, institutional memory, and governance around agent-mediated execution.
AGI and scope
This essay does not try to settle the boundary between advanced automation and AGI. That boundary does not make the issue less urgent. Even without autonomy in any stronger philosophical sense, systems in the current paradigm may mediate or replace a very large share of digital work. If that happens, the central question is not whether these systems count as persons. It is who governs the architecture through which so much execution will flow.
Takeaways
- Human governance has traditionally been organized around formal authority and execution power. AI-mediated governance adds the further problem of architecture.
- Agents are not just people and not just code. They are a different governance object.
- Once execution flows through agentic systems, architectural power becomes a distinct and consequential form of power.
- Shared agents should be understood as multi-principal delegation systems, not just productivity tools.
- The age of agents is not just a story of replacement. It is a story of organizational reorganization.
- The key governance question is no longer only who decides and who executes, but who controls the architecture through which execution happens.
References
[1] Philippe Aghion and Jean Tirole. Formal and Real Authority in Organizations. Journal of Political Economy, 105(1), 1997.
[2] Michael Lipsky. Street-Level Bureaucracy: Dilemmas of the Individual in Public Services. Russell Sage Foundation, 1980/2010.
[3] Katherine C. Kellogg, Melissa A. Valentine, and Angèle Christin. Algorithms at Work: The New Contested Terrain of Control. Academy of Management Annals, 14(1), 2020.
These three references map onto the essay’s three-step argument. Aghion and Tirole provide the basic distinction between formal authority and real authority, which is the starting point of the piece. Lipsky helps explain why execution power often accumulates at the point of implementation, because the people closest to the work inevitably exercise discretion. Kellogg, Valentine, and Christin extend that classic organizational problem into the age of algorithms by showing how systems can reorganize control itself. The essay builds on that progression: from the gap between authority and execution, to discretion embedded in execution, to the claim that AI introduces a further layer of architectural power.
Citation
If this essay is useful in your work, you can cite the blog post.
BibTeX
```bibtex
@misc{dong2026executionnolongerhuman,
  title   = {When Execution Is No Longer Human},
  author  = {Dong, Hanze},
  year    = {2026},
  month   = {March},
  url     = {https://hendrydong.github.io/blogs/pages/agentgov.html},
  note    = {Blog post on Hanze Dong's Blogs},
  urldate = {2026-03-13}
}
```
}