Engineering Blind Spots: What Control Engineers Don’t Model as “Security Risk”

By Muhammad Ali Khan, ICS/OT Cybersecurity Specialist — AAISM | CISSP | CISA | CISM | CEH | ISO 27001 LI | CHFI | CGEIT | CDCP

Introduction: The Gap Between Reliability and Adversarial Reality

Industrial control systems are engineered to survive equipment failure, environmental stress, and operator error. Control engineers excel at modeling process deviations, fault tolerance, redundancy, and safety margins. What they do not model well — often not at all — is intentional, adaptive, and adversarial behavior.

This is not negligence. It is a consequence of how control engineering evolved. ICS environments were designed under assumptions of trust, physical isolation, deterministic behavior, and benign failure modes. Modern cyber threats violate every one of those assumptions.

As a result, many of the most damaging OT cyber incidents did not exploit unknown vulnerabilities. They exploited engineering blind spots — conditions engineers never classified as “security risk” because they fall outside traditional control and safety models.

This article examines those blind spots, why they persist, and how they quietly undermine industrial resilience.

1. Determinism Assumptions in a Non-Deterministic Threat World

Control systems are built on deterministic logic:

  • If input A exceeds threshold B, output C occurs.
  • If valve feedback does not match command, raise alarm.
  • If PLC scan time exceeds limit, fault is triggered.

Cyber adversaries do not behave deterministically.

Attackers:

  • Introduce timing jitter below alarm thresholds
  • Alternate between valid and invalid states to avoid fault detection
  • Exploit edge-case logic paths never triggered in normal operation

Most engineering models assume faults are random or physical. Cyber actions are intentional, adaptive, and optimized for stealth.

Blind spot: Engineers model what can break, not what can be manipulated.

This is why process anomalies caused by attackers are often misdiagnosed as:

  • Instrument drift
  • Network congestion
  • Operator error
  • Poor tuning

By the time a cyber cause is suspected, the attacker has already learned the system’s tolerances.
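
To make this concrete, here is a minimal sketch with invented numbers: a fixed-threshold alarm of the kind described above never fires against a sustained bias that is engineered to stay just under the limit.

```python
# Hypothetical illustration only: values, noise model, and thresholds are assumptions.
import random

ALARM_LIMIT = 80.0      # assumed high-level alarm threshold (engineering units)
TRUE_SETPOINT = 60.0    # assumed normal operating level

def alarm(reading: float) -> bool:
    """Deterministic alarm logic: if the input exceeds the threshold, raise an alarm."""
    return reading > ALARM_LIMIT

random.seed(1)
alarms = 0
for sample in range(10_000):
    process_value = TRUE_SETPOINT + random.gauss(0, 1.0)  # normal process noise
    attacker_bias = 15.0        # pushes the process hard, but stays under the limit
    if alarm(process_value + attacker_bias):
        alarms += 1

print(f"Alarms raised over 10,000 samples: {alarms}")
# Typically 0: the process runs near 75 instead of 60 indefinitely, yet the logic
# sees nothing to report, because it models exceedance, not intent.
```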

2. Safety ≠ Security (But Engineers Often Treat Them as Overlapping)

In many plants, safety is implicitly treated as a security control:

  • “The SIS will trip if something goes wrong”
  • “Operators will notice abnormal behavior”
  • “Physical interlocks prevent dangerous states”

This assumption fails in cyber scenarios.

Attackers do not need to violate safety limits. They can:

  • Keep the process just inside safe boundaries
  • Gradually degrade equipment without triggering trips
  • Manipulate safety perception, not safety reality

In incidents like Triton/Trisis, the attack explicitly targeted the safety system itself, proving that safety logic is not inherently trustworthy under cyber conditions.

Blind spot: Engineers model unsafe states, not maliciously safe-looking states.
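
A short sketch of what a maliciously safe-looking state can mean in practice, again with invented numbers: the value presented to the operator is clamped inside the expected band while the real variable drifts out of it.

```python
# Illustration only: how a compromised reporting path can keep the displayed value
# "normal" while the physical variable degrades. All numbers are assumptions.

SAFE_LOW, SAFE_HIGH = 55.0, 65.0   # assumed band the operator expects to see

def falsified_display(actual: float) -> float:
    """Attacker-controlled reporting: never show anything outside the normal band."""
    return min(max(actual, SAFE_LOW), SAFE_HIGH)

actual = 60.0
for hour in range(0, 48, 6):
    actual += 1.5                  # slow, real degradation of the process
    print(f"t+{hour:02d}h  actual={actual:5.1f}  display={falsified_display(actual):5.1f}")
# The SIS trip point may never be reached and the display never looks abnormal,
# yet the equipment is being run outside its intended envelope.
```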

3. Process Drift Is Modeled as Aging, Not Manipulation

Control engineers are trained to expect drift:

  • Sensor aging
  • Fouling
  • Thermal effects
  • Mechanical wear

Drift is treated as a maintenance problem, not a threat signal.

Cyber attackers exploit this assumption by:

  • Introducing micro-adjustments over weeks or months
  • Manipulating calibration values instead of raw measurements
  • Biasing feedback loops without changing setpoints

Because drift is expected, it is tolerated. Because it is tolerated, it is rarely investigated.

Blind spot: No distinction between natural drift and adversarial drift in engineering models.

Most plants do not baseline rate-of-change behavior under adversarial conditions, making long-term manipulation effectively invisible.
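
One way to make that distinction at least askable is to accumulate small deviations over time rather than judging each sample against a fixed limit, in the style of a CUSUM test. The sketch below is a simplified illustration with assumed parameters, not a validated detector.

```python
# Simplified CUSUM-style drift monitor: flags a sustained small bias that a
# per-sample threshold would tolerate indefinitely. Parameters are assumptions.
import random

def cusum(readings, expected, slack=0.1, decision_limit=8.0):
    """Accumulate positive deviations beyond 'slack'; return the index of the
    first sample where the running sum exceeds 'decision_limit', else None."""
    s = 0.0
    for i, x in enumerate(readings):
        s = max(0.0, s + (x - expected - slack))
        if s > decision_limit:
            return i
    return None

random.seed(2)
expected = 60.0
# Weeks of "drift": a 0.3-unit bias added to otherwise normal readings.
readings = [expected + 0.3 + random.gauss(0, 0.5) for _ in range(500)]

print("first alert at sample:", cusum(readings, expected))
# A per-sample alarm tuned wide enough to avoid nuisance trips treats a 0.3-unit
# bias as noise forever; the cumulative view turns slow drift (natural or
# adversarial) into a question that someone has to answer.
```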

4. Trust in Field Devices and Engineering Workstations

Many control architectures implicitly trust:

  • PLC logic once commissioned
  • Field devices once calibrated
  • Engineering workstations once authenticated

This trust is rarely re-validated.

From an engineering perspective:

  • Logic is “known good”
  • Configuration changes are “intentional”
  • Device behavior is “as designed”

From a security perspective:

  • Logic can be modified without affecting function
  • Firmware can be altered without obvious symptoms
  • Configuration changes can be subtle and malicious

Blind spot: Engineers model component failure, not component deception.

This is why logic integrity monitoring, firmware validation, and configuration baselining are often missing — or treated as compliance tasks rather than operational necessities.
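
As a sketch of what configuration baselining can look like, assuming logic and device configurations can be exported to files (the export step is vendor-specific and not shown): hash everything at commissioning, re-hash on a schedule, and treat every difference as a question.

```python
# Minimal configuration/logic baselining sketch. File paths, the export mechanism,
# and the baseline storage are assumptions for illustration.
import hashlib
import json
from pathlib import Path

def snapshot(export_dir: str) -> dict[str, str]:
    """Map each exported file to its SHA-256 digest."""
    root = Path(export_dir)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def diff(baseline: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    """Classify changes relative to the commissioned baseline."""
    return {
        "modified": [f for f in baseline if f in current and baseline[f] != current[f]],
        "removed":  [f for f in baseline if f not in current],
        "added":    [f for f in current if f not in baseline],
    }

if __name__ == "__main__":
    # At commissioning: write snapshot("plc_exports/") to write-protected storage.
    baseline = json.loads(Path("baseline.json").read_text())
    print(json.dumps(diff(baseline, snapshot("plc_exports/")), indent=2))
    # Every entry is a question for engineering: was the change intentional,
    # documented, and authorized? "No record" is itself a finding.
```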

5. Network Behavior Is Treated as a Transport, Not an Attack Surface

Control engineers think in terms of:

  • Latency
  • Bandwidth
  • Redundancy
  • Availability

They rarely think in terms of:

  • Command injection
  • Replay attacks
  • Protocol abuse
  • Unauthorized state changes

ICS protocols were designed for reliability, not hostility. Many lack authentication, encryption, or session integrity.

Blind spot: Engineers assume “if the message arrived, it is legitimate.”

This is why:

  • Valid-looking packets can cause invalid states
  • Malicious commands are indistinguishable from legitimate ones
  • Network monitoring focused only on uptime misses active attacks
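
As one concrete example of the “arrived, therefore legitimate” assumption: Modbus/TCP carries no authentication, so the frame built below is a fully valid write request no matter who sends it. The only practical check is contextual, such as whether the sender is expected to write at all. Host addresses, register numbers, and the allowlist are assumptions for this sketch.

```python
# Illustration: a syntactically valid Modbus/TCP "write single register" request.
# Nothing in the protocol distinguishes an engineering workstation's write from
# an attacker's; only context (who sent it, when, to what) can.
import struct

def build_write_single_register(transaction_id: int, unit_id: int,
                                register: int, value: int) -> bytes:
    """MBAP header followed by function code 0x06 (write single holding register)."""
    pdu = struct.pack(">BHH", 0x06, register, value)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

WRITE_FUNCTION_CODES = {0x05, 0x06, 0x0F, 0x10}   # coil and register writes
AUTHORIZED_WRITERS = {"192.0.2.10"}               # assumed engineering workstation

def unexpected_write(src_ip: str, frame: bytes) -> bool:
    """True if the frame is a write request from a host not expected to write."""
    function_code = frame[7]                      # first PDU byte after the MBAP header
    return function_code in WRITE_FUNCTION_CODES and src_ip not in AUTHORIZED_WRITERS

frame = build_write_single_register(transaction_id=1, unit_id=1, register=0, value=0)
print(unexpected_write("192.0.2.99", frame))      # True: valid packet, wrong source
print(unexpected_write("192.0.2.10", frame))      # False: otherwise indistinguishable
```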

6. Human Operators Are Modeled as Safety Assets, Not Attack Targets

Engineering models treat operators as:

  • Decision-makers
  • Safety backstops
  • Alarm responders

Cyber attackers treat operators as:

  • Confusion points
  • Delay mechanisms
  • Trust anchors to exploit

By subtly altering system behavior, attackers can:

  • Normalize abnormal conditions
  • Train operators to ignore early indicators
  • Create alarm fatigue intentionally

Blind spot: Engineers do not model cognitive manipulation as a system risk.

This is why many attacks succeed without technical sophistication — because the human layer is unprotected by design.

7. Commissioning Is Treated as the End of Risk Introduction

Once a system is commissioned:

  • Logic is frozen
  • Architecture is considered stable
  • Risk is assumed to decrease over time

In reality, cyber risk increases post-commissioning due to:

  • Remote access additions
  • Vendor support connections
  • Patchwork upgrades
  • Temporary engineering changes that become permanent

Blind spot: Engineers model risk at design time, not as a living system property.

This creates environments where undocumented pathways exist — unknown to engineering, invisible to security, and exploitable by attackers.

8. Vendor Behavior Is Assumed Benign and Competent

Engineering culture assumes:

  • Vendors follow best practices
  • Remote access is controlled
  • Updates are safe
  • Security is “handled by the supplier”

In reality:

  • Vendors reuse credentials
  • Support tunnels persist indefinitely
  • Security controls are optional add-ons
  • Responsibility is contractually ambiguous

Blind spot: Engineers model vendor failure, not vendor compromise.

This is one of the most exploited attack paths in OT environments today.

9. Incident Response Is Modeled as a Technical Exercise

When something goes wrong, engineering response focuses on:

  • Restoring operation
  • Stabilizing the process
  • Avoiding downtime

Cyber incidents require:

  • Evidence preservation
  • Controlled isolation
  • Delayed recovery
  • Cross-disciplinary coordination

Blind spot: Engineers optimize for speed of recovery, not integrity of understanding.

This often destroys forensic evidence and allows attackers to persist undetected.

10. The Core Issue: Security Is Not a Variable in Control Theory

At its core, the problem is this:

Control engineering has no native variable for intent.

Physics does not lie. Adversaries do.

Until security is treated as:

  • A disturbance source
  • A manipulation vector
  • A dynamic adversary

…it will remain invisible to traditional engineering models.
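
One way to state the gap formally, as a sketch rather than a derivation: the textbook model lumps everything unexpected into zero-mean disturbances, while an adversarial model adds terms that are chosen, not drawn, and chosen specifically to stay under the detector's threshold.

```latex
\[
\begin{aligned}
\text{classical:}\quad x_{k+1} &= A x_k + B u_k + w_k, & y_k &= C x_k + v_k \\
\text{adversarial:}\quad x_{k+1} &= A x_k + B\,(u_k + a_k) + w_k, & y_k &= C x_k + v_k + s_k
\end{aligned}
\]
% w_k, v_k: zero-mean noise that detectors and filters are tuned for.
% a_k (actuation tampering) and s_k (sensor falsification): selected by an adversary
% to keep the residual r_k = y_k - C \hat{x}_k below the alarm threshold, i.e.
% optimized for stealth rather than governed by any probability distribution.
```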

Conclusion: From Blind Spots to Resilience

These blind spots persist not because engineers are careless, but because the threat model changed faster than the discipline.

Closing this gap does not mean turning control engineers into security analysts. It means:

  • Expanding engineering models to include adversarial behavior
  • Treating anomalies as questions, not nuisances
  • Designing systems that assume deception, not trust

Cybersecurity in OT is not about adding more tools. It is about correcting the assumptions that engineers were never trained to question.

Until then, attackers will continue to operate inside the tolerances of our own designs.

