When Autonomous Plants Meet Non-Autonomous Governance
Autonomy, Authority, and the Next Hidden Risk in Critical Infrastructure
A Companion to “Agent Conflicts: When Multiple AI Systems Disagree in OT”
By Muhammad Ali Khan, ICS/OT Cybersecurity Specialist — AAISM | CISSP | CISA | CISM | CEH | ISO27001 LI | CHFI | CGEIT | CDCP

Autonomy Is Outpacing Authority
Industrial and critical infrastructure environments are entering an operational phase defined less by connectivity and more by autonomous decision-making inside physical systems. Across energy, water, transportation, and manufacturing, OT environments are already deploying self-optimizing control systems, AI-driven maintenance decisions, automated cyber response, and agentic systems that act rather than merely advise.
At the same time, governance structures remain largely unchanged. They still assume committee-based oversight, escalation chains, manual approvals, and after-the-fact accountability. The result is a widening mismatch between systems that operate in milliseconds and governance models that operate in meetings. What once appeared to be an organizational inconvenience has become a material operational, safety, and cybersecurity risk, one that often remains invisible during steady-state operation and only becomes obvious during disruption.
This article builds directly on the concept of Agent Conflicts. Where agent conflicts describe what happens when autonomous systems disagree with one another, this piece examines a deeper and more systemic condition: what happens when autonomous systems outpace the human authority structures meant to govern them.
The Central Conflict — Real-Time Autonomy vs Delayed Governance
Autonomous systems require real-time authority to function as designed, while governance systems are built on delayed consensus. This model worked when automation executed fixed logic, humans retained decision authority, and cybersecurity served an advisory role. It begins to fail when AI systems dynamically adjust setpoints, initiate isolation or failover, suppress or escalate alarms, or trigger maintenance and shutdown logic without waiting for human approval.
Governance, however, still expects pre-approval, cross-functional alignment, documented justification, and retrospective review. When incidents occur, organizations often discover they cannot answer basic but critical questions: who authorized the action, who had the authority to stop it in time, whether the decision was cyber, safety, or production-related, and who is accountable if the system behaved exactly as designed. This is not a tooling failure. It is an authority failure.
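The four post-incident questions above can be forced to have answers before an autonomous action ever fires, by recording authority metadata at runtime rather than reconstructing it afterward. The sketch below is illustrative only; the field values and playbook names are hypothetical, not taken from any real deployment.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class DecisionDomain(Enum):
    """Which authority chain the decision belongs to."""
    CYBER = "cyber"
    SAFETY = "safety"
    PRODUCTION = "production"


@dataclass
class AutonomousActionRecord:
    """Captured *before* execution, so the post-incident questions
    (who authorized, who could stop it, what domain, who owns it)
    have answers by construction rather than by reconstruction."""
    action: str              # e.g. "isolate PLC network segment"
    authorized_by: str       # the pre-approved authority, not a committee
    override_authority: str  # role empowered to stop the action in time
    domain: DecisionDomain   # cyber, safety, or production decision
    accountable_owner: str   # a single named owner, even for by-design behavior
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


# Hypothetical example record for an automated isolation action.
record = AutonomousActionRecord(
    action="isolate PLC network segment",
    authorized_by="cyber response playbook CR-07",  # hypothetical playbook ID
    override_authority="control_room_operator",
    domain=DecisionDomain.CYBER,
    accountable_owner="ot_security_lead",
)
```

The point of the structure is not the logging itself but that every field must be filled in at design time; an action with no named override authority or accountable owner simply cannot be deployed.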
The Illusion of “Human-in-the-Loop” Governance
“Human-in-the-loop” has become one of the most reassuring phrases in autonomous system design, particularly in critical infrastructure. In practice, it is often an illusion. In high-speed OT environments, humans are rarely part of the decision loop; they are part of the audit trail. Decisions occur at machine speed, while humans receive notifications after actions have already propagated through physical processes, often with incomplete or delayed context.
Unless governance is explicitly redesigned for autonomy, oversight becomes symbolic rather than functional. The human role quietly shifts from decision-maker to explainer of outcomes. This is dangerous not because humans are excluded, but because organizations continue to believe control still exists when, operationally, it no longer does.
When Policy Cannot Physically Intervene
Most governance frameworks assume reversibility. They assume actions can be rolled back, systems can be disabled, access can be revoked, and decisions can be undone. Physical systems do not behave this way. Once an autonomous action propagates through a process, material flows change, thermal states shift, mechanical stress accumulates, and safety margins are consumed.
At that point, governance cannot undo the action. It can only document it, explain it, and defend it later. This is where governance stops functioning as a control mechanism and becomes a post-event narrative tool.
The risk is not that autonomy exists, but that authority arrives only after physical consequences are already locked in.
Cybersecurity Without Decision Authority Is Toothless
The implications for cybersecurity are unavoidable. In many critical infrastructure organizations, cybersecurity teams can detect anomalies, identify risk, and understand consequences, yet lack the authority to act without permission. When autonomous systems are involved, this creates a dangerous asymmetry: the system can act instantly, while security must escalate.
Policies that lack runtime decision authority do not reduce risk; they introduce delay. That delay creates a new class of vulnerability, one that does not depend on malware, exploits, or technical compromise, but on organizational hesitation. When the response authority cannot match machine speed, detection alone becomes insufficient.
Governance Latency — Naming the Hidden Risk
To understand this failure mode, a new vocabulary is required. Governance latency describes the time gap between when an autonomous system can act and when an organization can legitimately approve, stop, or override that action. In agent-rich OT environments, high governance latency equates to high systemic risk.
This risk is often invisible during normal operation, where autonomous actions appear beneficial or neutral. It becomes catastrophic during disturbance, when rapid decisions collide with slow authority. In such environments, attackers do not need to defeat AI systems; they exploit governance latency instead.
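Governance latency can be made concrete as a simple difference: the time the system needs to act versus the time the human authority chain needs to respond. The numbers below are hypothetical, chosen only to illustrate the scale of the mismatch.

```python
def governance_latency(action_time_s: float, approval_chain_s: list[float]) -> float:
    """How far legitimate authority lags behind the autonomous system.

    action_time_s:     time the autonomous system needs to execute its action
    approval_chain_s:  per-step human approval delays along the escalation chain
    Returns the gap in seconds; positive means authority arrives too late.
    """
    authority_time = sum(approval_chain_s)
    return authority_time - action_time_s


# Hypothetical scenario: an agent isolates a pump segment in 50 ms, while
# the escalation chain (operator -> shift lead -> plant manager) takes
# 5 + 10 + 15 minutes to produce a legitimate stop/approve decision.
gap = governance_latency(0.05, [300.0, 600.0, 900.0])
print(f"Governance latency: {gap:.0f} seconds")
```

Any positive gap means the physical process has already changed state by the time authority exists; the governance question is how to drive that gap toward zero for the actions that matter, not how to speed up meetings.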
The Hidden Danger — Attackers Exploit Governance Lag
Modern attackers increasingly target organizational delay rather than technical weakness. They exploit unclear authority, slow approvals, fear of stopping production, and human hesitation to override “autonomous intelligence.”
A minor manipulation, delayed telemetry, contextual ambiguity, or subtle data distortion can trigger autonomous actions that governance structures are incapable of stopping in time.
In these scenarios, the attack surface is not the AI model or the control logic. It is governance lag. The system behaves as designed, but the organization is structurally unable to intervene fast enough to prevent harm.
Accountability Drift — Who Owns an Autonomous Decision?
When autonomous systems act, accountability often fragments. Engineers built the system, security monitored its behavior, operations ran the process, and management approved deployment. Yet at runtime, no single role clearly owned the decision. This accountability drift is where legal exposure begins, regulatory scrutiny intensifies, and safety justifications collapse, not because systems failed, but because authority was never explicitly assigned.
Responsibility exists everywhere on paper and nowhere in practice. The more autonomous the system becomes, the more dangerous this ambiguity grows.
Industry 5.0 Perspective — Human-Centric Systems Require Human Authority
Industry 5.0 emphasizes human-centric design, resilience over optimization, trustworthy AI, and meaningful human oversight. However, human-centric does not mean human-paced. If humans are expected to retain responsibility, authority must exist at runtime rather than retrospectively. Humans must be able to arbitrate conflicts, not merely audit outcomes, and governance must be embedded into system architecture rather than externalized into documents.
Industry 5.0 is not about reducing autonomy. It is about making autonomy governable.
What Good Looks Like (Principles, Not Frameworks)
Effective governance in autonomous OT environments is defined by principles rather than checklists. Autonomy must be pre-authorized within clear operational boundaries rather than granted as blanket trust. Authority mapping must exist at runtime, clarifying who can override what, instantly. Governance must be designed into system architecture, not layered on afterward through policy. Organizations need explicit kill authority, not just kill switches: someone empowered and obligated to stop the system when necessary.
Finally, post-incident learning must feed back into authority models so governance evolves alongside autonomy.
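Two of these principles, pre-authorized operating envelopes and runtime authority mapping, lend themselves to a direct sketch. The bounds, action names, and roles below are invented for illustration; a real deployment would derive them from the plant's safety case and org chart.

```python
# Pre-authorized envelope: the agent may act autonomously only inside it.
OPERATING_BOUNDS = {"setpoint_bar": (2.0, 6.0)}  # hypothetical pressure limits

# Runtime authority map: who can override which action, instantly.
AUTHORITY_MAP = {
    "adjust_setpoint":    "control_room_operator",
    "isolate_segment":    "ot_security_lead",
    "emergency_shutdown": "shift_safety_officer",  # kill *authority*, not just a switch
}


def pre_authorized(action: str, value: float) -> bool:
    """True only when the action falls inside the pre-approved envelope;
    anything outside must escalate to the mapped human authority."""
    if action != "adjust_setpoint":
        return False
    low, high = OPERATING_BOUNDS["setpoint_bar"]
    return low <= value <= high


def escalation_target(action: str) -> str:
    """Every autonomous action must resolve to exactly one override role."""
    return AUTHORITY_MAP[action]
```

Note what the structure forbids as much as what it allows: an action absent from the authority map has no override owner and therefore, under these principles, no right to exist in production.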
Closing Thought
The future of critical infrastructure is autonomous. But autonomy without redesigned governance does not remove human responsibility; it obscures it. The most dangerous systems are not those that act on their own, but those that act faster than anyone is allowed to stop them.