The Physics of Work — Human Cognitive Limits, AI Limitations, and the Delegation Errors Hiding in Plain Sight
What this means for your organization
Most AI governance catches the reckless error — the deployment that should never have happened. Almost no one catches the timid error — the human work that should have been delegated months ago but was not, because habit and organizational inertia kept it in place. The Physics of Work is a diagnostic layer that tests every human assignment against the actual constraints of the platform performing it. When no constraint requires a human, the Physics flags what everyone else ignores: the massive, quiet waste of human hours on work that machines should own.
The error no one is looking for
The Language of Work — Seampoint’s process ontology for how organizations allocate cognitive labor — has four components. The Vocabulary gives you platforms and verbs. The Grammar (the Capability Matrix) tells you which assignments are structurally valid. The Compiler runs a two-stage validation. And the Physics, the second stage of that Compiler, answers the question that the Grammar cannot: is this assignment sustainable?
The Grammar catches Errors of Commission. These are reckless delegation — assigning work to a platform that structurally cannot do it. A prediction machine assigned to DECIDE. A logic machine assigned to INTERPRET. These are violations of structural rules, context-free and absolute. Everyone can see them once they are named.
The Physics catches something harder. Errors of Omission are timid non-delegation — keeping work on a platform that cannot sustain it at the required level, or refusing to delegate when no governance constraint requires human involvement. These errors are invisible because they look like normal work. A compliance analyst spending thirty hours a month formatting regulatory reports from structured data. A logistics coordinator manually routing shipments that follow deterministic rules. A quality inspector monitoring a production line screen for eight hours straight. None of these assignments violate the Grammar. All of them violate the Physics.
This is where the AI Efficiency Dividend lives — not in flashy automation, but in the quiet liberation of human capacity from coordination overhead and work about work that accumulated over decades of organizational accretion.
Every platform has a datasheet
Engineers do not deploy hardware without reading its datasheet — the document that specifies operating limits, failure modes, and the conditions under which the component stops performing to spec. The Physics of Work treats every platform the same way. Each platform has architectural constraints that are features of its design, not deficits in its training. You cannot train them away any more than you can train a bridge to hold more weight than its materials allow.
The human datasheet
Human cognition is extraordinary. It is also constrained in ways that organizations systematically refuse to acknowledge.
Vigilance decrement. Human attention on repetitive monitoring tasks degrades after approximately twenty minutes. This is not a discipline problem. It is neurology. A radiologist reviewing mammograms catches significantly fewer anomalies in the second hour than the first. A security analyst watching network traffic alerts at hour six is providing the illusion of monitoring, not the reality. The named violation is the Vigilance Fallacy — and it is one of the most widespread unacknowledged failures in enterprise operations.
Buffer overflow. Miller’s Law establishes that human working memory holds roughly seven items, plus or minus two. Ask a supply chain manager to simultaneously track nineteen variables across a logistics network and you have exceeded the platform’s memory architecture. The information does not get processed more slowly. It gets dropped.
Decision fatigue. Cognitive resources deplete with sustained decision-making. A widely cited study of judicial parole hearings found that approval rates drop from sixty-five percent to near zero over the course of a morning session, then reset after a meal break. A loan officer reviewing her two-hundredth application at four in the afternoon is not exercising the same judgment she brought to the first application at eight in the morning. After roughly a hundred consequential decisions, the human platform is running on fumes.
Circadian degradation. Decision quality drops by as much as sixty-five percent from morning to afternoon after four or more hours of sustained cognitive effort. This is not an argument for better scheduling. It is a fact about human biology that most organizations build their workflows around as if it did not exist.
Scale and precision ceilings. Humans cannot process a thousand items simultaneously. They maintain a baseline error rate of two to five percent on procedural work, even with training and motivation, in contexts where less than one percent is required. These are not performance gaps to be closed with better management. They are the operating specifications of the platform.
The point is not that humans are deficient. The point is that humans are magnificent at some things and architecturally wrong for others — and the Physics insists on telling the truth about both.
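The human datasheet above can be written down the way an engineer would: as a structured record of operating limits. Every number below comes from this section; the field names and the helper method are illustrative.

```python
# The human "datasheet" from this section as a structured record.
# The figures are taken from the text; the class and helper are a sketch.
from dataclasses import dataclass

@dataclass(frozen=True)
class HumanDatasheet:
    vigilance_minutes: int = 20                   # monitoring attention degrades
    working_memory_items: tuple = (5, 9)          # Miller's Law: seven, plus or minus two
    decision_budget: int = 100                    # consequential decisions before depletion
    sustained_effort_hours: int = 4               # circadian degradation threshold
    procedural_error_rate: tuple = (0.02, 0.05)   # 2-5% baseline on procedural work

    def within_spec(self, items_tracked: int, minutes_on_task: int) -> bool:
        """Rough check: does an assignment stay inside the platform's limits?"""
        return (items_tracked <= max(self.working_memory_items)
                and minutes_on_task <= self.vigilance_minutes)

spec = HumanDatasheet()
# The supply chain manager tracking nineteen variables exceeds the
# memory architecture, so this assignment is out of spec:
assert not spec.within_spec(items_tracked=19, minutes_on_task=10)
```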
The prediction machine datasheet
Prediction machines — large language models, ML classifiers, recommendation engines — are probabilistic systems. Their constraints follow directly from that architecture.
No truth, only likelihood. The same input can produce different outputs. This is not a bug. It is the mathematical reality of probabilistic generation. The named violation is the Accountability Gap: when an organization treats a probability as a decision, there is nobody standing behind the outcome.
Hallucination is a feature. The mechanism that enables a language model to produce creative, useful text is the same mechanism that produces fabrication. You cannot remove hallucination without removing the capability. Phantom Authority is the named error — the moment an organization treats generated content as authoritative without human verification.
Distribution dependence. Prediction machines are trained on the past and interpolate brilliantly within their training distribution. They have no mechanism for recognizing when the present has departed from historical patterns. Black swan events are invisible to them by definition. A credit risk model trained on a decade of stable housing prices will produce confident, useless output when the market inverts.
Session amnesia and semantic drift. These systems do not learn across sessions. Meaning wanders over long sequences. They exist in information, not physics. Every constraint follows from the same root: probabilistic systems are powerful interpolators, not truth-generators.
The logic machine and matter machine datasheets
Logic machines — rules engines, ERP systems, workflow automation — have a different constraint profile entirely. They do not degrade. They either work perfectly or fail completely. The named error is the Brittleness Trap: ambiguous input does not produce a less accurate result. It produces garbage or a crash. There is no graceful degradation curve.
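The Brittleness Trap is easy to demonstrate with a toy routing rule. The rule format below is invented for illustration; the point is the shape of the failure: well-formed input is handled perfectly, ambiguous input produces a crash rather than a slightly worse answer.

```python
# Toy illustration of the Brittleness Trap: a logic machine either works
# perfectly or fails completely. The rule format ('REGION-PRIORITY') and
# the route table are invented for this sketch.

def route_shipment(code: str) -> str:
    """Deterministic routing rule for codes like 'EU-1'."""
    region, priority = code.split("-")         # ambiguous input crashes here
    routes = {"EU": "rotterdam", "US": "memphis"}
    return f"{routes[region]}:{priority}"      # unknown region crashes here

route_shipment("EU-1")      # works perfectly
# route_shipment("EU 1")    # ValueError: garbage in, crash out
# route_shipment("APAC-2")  # KeyError: no graceful degradation curve
```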
Matter machines — robots, autonomous vehicles, industrial actuators — are bound by physics. They must be co-located with the work. Environmental variability affects performance. And physical actions are irreversible in a way that digital operations are not. You cannot undo a robotic incision with Ctrl-Z.
The four authority constraints
The Physics does not simply catalog platform limitations and call it a day. It applies those limitations through a structured test. For every human work assignment that passes the Grammar, the Physics evaluates four Authority Constraints — the conditions under which human involvement is structurally required.
Consequence. How severe is the worst-case outcome if this assignment goes wrong? A misformatted internal status report has low consequence. A misdiagnosed tumor has catastrophic consequence. When consequence is high, the human platform’s unique capacity for accountability binds.
Judgment. Does this task require weighing incommensurable values — trade-offs where reasonable people could disagree, where no algorithm can specify the right answer? Designing a new pharmaceutical compound involves judgment at every stage. Generating a monthly revenue summary from an accounting system does not. When judgment binds, prediction machines can amplify but cannot replace.
Connection. Does this task require genuine human relationship or trust? A physician delivering a terminal diagnosis. A manager conducting a performance review that requires reading the room, adjusting tone, absorbing emotion. When connection binds, automation is not just inefficient — it is destructive.
Reliability. What level of verification does the output require, and which platform can sustain that level? For procedural, repetitive, high-volume work, machines are more reliable than humans. This is not opinion. It is the human datasheet: two to five percent error rate on procedural tasks, vigilance decrement after twenty minutes, decision quality degradation after four hours.
When none of these constraints bind — when consequence is low, judgment is minimal, connection is absent, and reliability favors machines — there is no structural reason to keep a human in the loop. The Physics flags this as an Error of Omission.
What the Physics finds in practice
Walk into any large organization and apply the four Authority Constraints to the work humans are currently doing. The results are consistent and uncomfortable.
A healthcare system assigns registered nurses to manually transcribe physician orders from one electronic system into another, reconciling formatting differences between platforms. Consequence: low, because a pharmacist independently verifies every order before dispensing. Judgment: absent — the task is transcription, not clinical reasoning. Connection: absent. Reliability: the machine is better — nurses performing this task at volume maintain a three percent error rate that a logic machine would reduce to zero. No constraint binds. This is AI Handoff Work. Dozens of nursing hours per week are consumed by coordination overhead that has nothing to do with patient care.
A manufacturing company assigns quality inspectors to visually monitor a production line for defects across eight-hour shifts. The Grammar validates the assignment: humans can MONITOR. The Physics rejects it. Vigilance decrement means the inspector’s detection rate degrades within the first half hour and continues to fall. By hour four, the inspector is catching fewer defects than a properly calibrated computer vision system would catch continuously. No Authority Constraint requires a human here. The defects are physical and pattern-based, not judgment-dependent. This is work that should be liberated from humans, not because humans are expendable, but because they are the wrong platform for this specific verb at this specific volume.
Now consider the counter-example. A commercial bank assigns senior relationship managers to formulate lending proposals for major corporate clients. Does the Physics flag this? It tests the constraints. Consequence: high — a poorly structured deal can cost tens of millions. Judgment: the task requires weighing the client’s strategic trajectory, competitive dynamics, and risk appetite — values that cannot be reduced to a scoring model. Connection: the proposal is part of a relationship that has been built over years. The judgment and connection constraints bind. This is AI Amplified Work at best — a prediction machine can draft preliminary term sheets, run scenario analyses, gather market comparables, but the human formulates the proposal that goes to the client. Role distillation, not role elimination.
The efficiency dividend hiding in your org chart
The Physics reveals a pattern that repeats across every industry we examine. Somewhere between thirty and fifty percent of the cognitive work currently assigned to humans in a typical enterprise involves tasks where no Authority Constraint binds. This is not a theoretical estimate. It is what emerges when you apply the four-constraint test to actual workflows.
This work is not unimportant. It is often essential. But it is coordination overhead — formatting, routing, transcribing, reconciling, monitoring, summarizing — that accumulated because, until recently, humans were the only cognitive platform available. The arrival of prediction machines and the maturation of logic machines means that the structural rationale for these assignments has evaporated. The assignments persist because nobody tested whether they should.
The Physics tests them. And when it finds AI Handoff Work, it does not advocate for layoffs. It advocates for role distillation — liberating human hours from work about work so that people can be redeployed to the judgment-intensive, connection-dependent, consequence-bearing work where human authority is not just appropriate but irreplaceable. The Efficiency Dividend is not a headcount reduction. It is a capability expansion.
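The arithmetic behind the Dividend is deliberately simple. The thirty-to-fifty-percent range comes from the text above; the headcount and hours figures below are invented inputs for illustration.

```python
# Back-of-envelope sizing of the Efficiency Dividend. The 30-50% range is
# from the text; the 200-person, 40-hour inputs are invented for the sketch.

def dividend_hours(people: int, hours_per_week: float,
                   handoff_fraction: float) -> float:
    """Weekly human hours liberated for redeployment, not headcount cut."""
    return people * hours_per_week * handoff_fraction

low  = dividend_hours(people=200, hours_per_week=40, handoff_fraction=0.30)
high = dividend_hours(people=200, hours_per_week=40, handoff_fraction=0.50)
# For a 200-person function: between 2,400 and 4,000 hours a week.
```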
The Grammar catches the errors everyone can see. The Physics catches the errors everyone is making.
The Language of Work
The Physics is Stage 2 of the Compiler — the context-dependent layer that complements the context-free Grammar. The Language of Work provides a complete architecture for describing and validating work allocation:
- Vocabulary: The Four Platforms define who performs work. The Nine Verbs define what operations work consists of.
- Grammar: The Capability Matrix defines which platform-verb assignments are structurally valid — Stage 1 of the Compiler.
- Physics: The Physics of Work (this page) defines which assignments are sustainable given platform constraints — Stage 2 of the Compiler.
- Compiler: The Compiler runs both stages as a single validation pass.
Related Concepts
- The AI Readiness Scale — The three categories of work that emerge from Physics-validated analysis
- The Stewardship Spectrum — The five-tier governance model that translates Physics findings into deployment architecture
Further Reading
- The Four Kinds of Actors — The hard constraints of each actor type: human vigilance limits, software brittleness, hardware physics, AI confabulation
- The Backstory Behind “The Distillation of Work” — Why theoretical AI capability (60-80% of tasks) diverges so sharply from deployment reality (17%)
- Will AI Oversight Be the New Email Inbox Burnout? — How adding AI creates new coordination overhead that the Physics must account for
The Physics of Work is diagnostic, not prescriptive — it identifies where your organization is wasting human capacity, but the reallocation strategy depends on your specific context, workforce, and goals. Seampoint works with leadership teams to run the Compiler against live workflows, quantify the Efficiency Dividend, and design role distillation plans that expand organizational capability without disrupting continuity.