A Tale of Three AI Cars
Jeff Whatcott · November 13, 2025
It seems like every day we are being handed new, powerful cognitive instruments.
From the tools that generate our emails to the systems that analyze our data, we’re told these AI instruments are “co-pilots” or “partners.” This framing is seductive, but it’s also a trap. It encourages us to anthropomorphize these tools, to treat them as colleagues rather than what they are: sophisticated instruments that we must operate with skill and awareness.
A partnership implies a relationship. An instrument demands a protocol.
The test of any complex instrument—a high-frequency trading algorithm, an airliner’s autopilot, or an AI agent—is not how it performs in ideal conditions. It’s how it behaves, and what it demands of its operator, when it encounters the unexpected.
There is no better illustration of this than the self-driving cars that are increasingly common on our roads today. We lump them all together, but they represent three competing philosophies for how humans should interact with cognitive instruments.
The difference isn’t in the technology—it’s in the operating protocol. It’s about what your job is as a human, and what happens in the critical moment of failure.
The answer is best understood by looking at three distinct philosophies: two positive patterns for success and one anti-pattern of ambiguity.

The Subaru Model: An Augmented Operator
Subaru’s philosophy is built on a clear, rigid protocol: You are the operator. The AI is your instrument. Period.
Subaru’s EyeSight is a “Level 2” system, meaning it can help with steering and speed, but it is never in charge. Its job is to assist, not to take over. There is a clearly defined operator/instrument relationship. The synergy creates a human who is better, safer, and less fatigued.
But here is the brilliant, and often overlooked, part of their design: The system’s most important component isn’t the one watching the road. It’s the one watching you.
This “DriverFocus” system is a second instrument whose only job is to ensure the human is still being a skilled operator. It watches your head and eye position. The moment you get drowsy or look at your phone, it assertively alerts you with an audible tone and dashboard message.
Think about what this means. Subaru recognized that the biggest risk in using this instrument isn’t the tool failing—it’s the human operator getting bored. It’s “automation complacency.”
Subaru’s AI is a true instrument. It not only assists with the task; it comes with a second instrument to manage the user. They engineered a complete solution—a full stack—for the human-AI seam, one that assertively prompts you to remain the attentive, responsible operator.
And what happens when the instrument gets confused?
When a Subaru gets confused, it clearly and immediately hands control back to the human operator with an unmistakable alarm and visual dashboard message.
The handoff is the protocol. The instrument knows its limits and transfers control to the skilled operator, whom it has already verified is attentive. The operator’s job is never in doubt.
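If it helps to see the shape of this protocol, here is a minimal sketch in Python. Everything in it is illustrative: the class names, the rules, and the messages are my shorthand for the pattern, not Subaru’s actual implementation.

```python
from enum import Enum, auto

class Mode(Enum):
    ASSISTING = auto()   # instrument helps; human remains the operator
    HANDED_OFF = auto()  # instrument hit its limit; human has full control

class AugmentedOperatorSeam:
    """Toy model of the operator/instrument protocol described above."""

    def __init__(self) -> None:
        self.mode = Mode.ASSISTING

    def tick(self, instrument_confident: bool, operator_attentive: bool) -> str:
        # Rule 1: the seam's first job is keeping the human an attentive operator.
        if not operator_attentive:
            return "ALERT: audible tone + dashboard message (eyes forward)"
        # Rule 2: at the instrument's limits, hand off explicitly and loudly.
        if not instrument_confident:
            self.mode = Mode.HANDED_OFF
            return "HANDOFF: unmistakable alarm (you have control)"
        return "assisting: steering and speed support active"

if __name__ == "__main__":
    seam = AugmentedOperatorSeam()
    print(seam.tick(instrument_confident=True, operator_attentive=False))
    print(seam.tick(instrument_confident=False, operator_attentive=True))
```

Note the ordering: the attention check comes before anything else, because in this philosophy the seam’s first job is managing the human, not the road.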
The Waymo Model: A Liberated Passenger
Waymo represents a completely different philosophy. It looked at the messy problem of the human-AI handoff and said, “Let’s eliminate the human operator entirely.”
When you get into a Waymo, there is no one in the driver’s seat. While the vehicle is a modified production car and has a steering wheel, it is functionally absent for you. It’s an instrument you have no access to and no responsibility for. This design choice—making the human a pure passenger—is the telling one. It’s not an instrument you operate. It is a service, powered by an instrument, that you hire. The “job” of the human is unambiguous: you are a passenger, free to work, sleep, or relax. The AI’s job is 100% “Operator.”
This clean-cut protocol comes with a famous trade-off: the geofence that defines the well-mapped roads where the car is expected to perform reliably. The Waymo won’t go where it is not confident. But in this model, the geofence isn’t a weakness; it’s the enabling constraint. It is the non-negotiable “contract” that makes the synergy of being a passenger possible.
And what happens when the instrument gets confused?
When a Waymo gets confused, it stops.
Its protocol is to fail safely. It pulls over, puts on its hazards, and calls for a remote specialist to give it new instructions. It never “muddles through,” and it never asks you—the passenger—to grab the wheel. The protocol is, once again, crystal clear. It doesn’t always work perfectly in real life, but at least the seam design is coherent.
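Sketched the same way, the Waymo-style protocol has two rules: refuse work outside the contract, and fail safely when confused. Again, every name here is an illustrative assumption, not Waymo’s actual system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Geofence:
    """The enabling constraint: the mapped area where the instrument
    is trusted to perform reliably."""
    allowed_zones: frozenset

    def covers(self, zone: str) -> bool:
        return zone in self.allowed_zones

def request_trip(fence: Geofence, pickup: str, dropoff: str) -> str:
    # Rule 1: refuse work outside the contract; never improvise.
    if not (fence.covers(pickup) and fence.covers(dropoff)):
        return "declined: trip falls outside the geofence"
    return "accepted: human rides as a pure passenger"

def on_confusion() -> str:
    # Rule 2: fail safely. Escalate to a remote specialist,
    # never to the passenger.
    return "pull over, hazards on, request remote guidance"

fence = Geofence(frozenset({"downtown", "airport"}))
print(request_trip(fence, "downtown", "suburbs"))  # declined
print(on_confusion())
```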
The Tesla Model: An Ambiguous Supervisor
This brings us to Tesla. Tesla’s FSD is arguably the most technologically ambitious and capable of the three. It’s also, by far, the most dangerous, precisely because its engineering is incomplete.
Tesla is selling a partner but delivering an instrument.
This is where the engineering philosophy fails. Tesla built a world-class AI component, but an incomplete system that too often fails to manage the human-AI seam.
While the Tesla marketing has been somewhat ambiguous as to the role and responsibilities of the driver, its fine print has been clear:
“Before enabling Autopilot, you must agree to ‘keep your hands on the steering wheel at all times’ and to always ‘maintain control and responsibility for your vehicle.’”
The human operator is 100% responsible at all times, even when the system is active. That is the legal and functional definition of a Level 2 system. So a Tesla is a “Level 2” instrument, just like the Subaru.
But the “Full Self-Driving” marketing and the experience (it can drive you across a city) would seem to imply that it’s a Level 4 system, like the Waymo. It’s not.
This puts the human in an impossible, undefined role: The “Supervisor.”
A “supervisor” is a person who is 100% legally responsible for a task but is 0% actively engaged in it. This is a job that the human brain is physiologically incapable of performing.

Tesla Autopilot engaged on I-80 (source: Natecation, CC BY-SA 4.0)
The Tesla instrument is so good, so often, that it trains you to be bored. It lulls your attention to sleep. It conditions you to trust it as a partner. But it’s still just an instrument, and it can (and does) make mistakes. It then expects its “supervisor”—who has been mentally checked out for 20 minutes—to snap to full, life-saving attention in under a second. The National Highway Traffic Safety Administration has investigated whether Tesla’s recall adequately addressed this problem.
The original design of Tesla’s self-driving system was very powerful, but it was also half-blind. It could see and interpret the outside world with stunning sophistication, yet it was effectively blind to the state of the driver.
While the company included basic steering-torque sensors, its original configuration did not use the internal camera to monitor driver attention. When Tesla later began using the internal camera for this purpose, the camera lacked the infrared illumination needed for driver monitoring at night.
Recent Tesla models have added infrared illumination so driver monitoring can work at night, but that didn’t stop Tesla from enabling “Full Self-Driving” on older models that lack it. Even with all the catch-up effort, Tesla’s current driver monitoring appears to be less effective than Subaru’s decade-old “DriverFocus” system.
This all suggests that monitoring the human operator was not a primary focus of the initial Tesla design, and that the company might still be somewhat conflicted about it. It appears to be struggling with a crisis of ambiguity. Its cognitive instrument has no clearly and consistently defined protocol. The human is an ambiguously defined supervisor.
So what happens when a Tesla in full self-driving mode gets confused? It muddles through.
It doesn’t have the clear “stop” protocol of a Waymo. It doesn’t have the clear “handoff” protocol of a Subaru. It encounters a weird shadow or a new construction zone and… tries its best. It muddles through the confusing, high-stakes moment, all while assuming its checked-out “supervisor” is ready to intervene.
This muddle-through protocol is a direct result of an apparently incoherent, inconsistent design for the human-AI seam. We can see this incoherence continuing in the company’s own conflicting strategies for upcoming products.
As Tesla launches its “Robotaxi” service, it has to resolve the ambiguity to build a viable commercial offering. The model it has chosen is Waymo’s. The Robotaxi pilot operates within a defined geofence, and the planned “Cybercab” is being designed with no steering wheel, though the company seems to be hedging on that now.
The Tesla team is smart and responsive to data, so they will likely find their way toward a well-designed seam between the human driver and the AI driving instrument. But their path to date is a pure expression of the problem of the “Ambiguous Supervisor”: a system that places the human operator in an inconsistent and undefined role, holding them responsible for the very ambiguity the system itself creates.
The Blueprint for Our AI Future
These three cars give us the full blueprint for the cognitive instruments we are integrating into our work. They are not just three products; they are two positive patterns for synergy and one anti-pattern of ambiguity.
The first is The Subaru Model (The Augmented Operator): a positive pattern where the AI assists the human and also helps the human remain a skilled, engaged operator, creating a synergy that makes the operator better and safer.
The second is The Waymo Model (The Liberated Passenger): a positive pattern where the AI fully takes over a defined, “geofenced” task, liberating the human for other activities.
Finally, there is The Tesla Model (The Ambiguous Supervisor): the anti-pattern. It’s the seductive “capable” instrument that feels like a partner but is an incomplete, incoherent system. This ambiguity is the most dangerous state. It invites the human to stop being an “operator” and to treat the tool as a “colleague,” lulling them into complacency while still holding them 100% responsible for the output.
This “Tesla Model” is the customer service chatbot that capably answers 99% of questions, but then confidently invents a fake refund policy for which a court holds the company liable. It’s the marketing AI that generates 99% of a campaign, feeling like a creative partner, but produces a “soulless” heritage ad that damages an iconic brand.
The real problem isn’t just the tool’s error; it’s the incoherent system that failed to account for the human operator at the seam, blocking synergy and creating risk instead.
As we use these new tools, the most important question isn’t “How smart is the instrument?”
It’s “What is the operator’s job?”
And the follow-up, which you must have an answer for, is: “What is the defined protocol when the instrument gets confused?”
If you don’t have a crystal-clear answer, you don’t have a defined tool. You have an ambiguous crisis waiting to happen.
This isn’t just about cars. It’s about your new AI-powered inventory forecast, your generative AI for marketing, and your Copilot for sales. The “Tesla Model” is seductive because it feels most powerful, but it’s an operational nightmare. It’s the chaos we see in most organizations—it pushes high-stakes cognitive work onto your team without a protocol, guaranteeing failure.
This ambiguity is the precise danger many theorists warn about. As Dennis Yi Tenen writes in Literary Theory for Robots:
“The danger of AI therefore lies not in its imagined autonomy, but in the complexity of causes contributing to its effects, further obscured by metaphor. It is crucial that we keep the linkages of responsibility intact if we hope to mitigate the social consequences of AI.”
Keeping those “linkages of responsibility intact” is the single most important part of integration. It’s not about how “smart” the AI is; it’s about the architecture of responsibility.
Building a “Subaru Model” (Augmented Operator) or a “Waymo Model” (Liberated Passenger) for your business requires this defined architecture. It starts by answering three non-negotiable questions.
First, what is the AI’s exact job? Is it a “Waymo,” fully automating a “fenced” task like processing standard invoices? Or is it a “Subaru,” assisting a human who remains the operator, like a fraud analyst reviewing flagged transactions?
Second, what is the human’s exact job? This is the step most companies miss. If the AI is an assistant, what is the human’s protocol? How do you ensure they remain the skilled operator? What’s their “DriverFocus” system to prevent complacency?
And finally, what is the failure protocol? When the AI gets confused—when the forecast looks odd or the generated marketing copy hallucinates—what is the defined handoff? Who makes the call, and what are they supposed to do?
When you have clear answers to those questions, you’ve got a well-designed seam between the humans and the AI that makes upside leverage possible and contains downside risk. Take the time to design that seam.
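To make those three questions concrete, here is a minimal sketch of what a documented seam might look like in code. It is an assumption-laden illustration, not a prescription: the SeamDesign fields, the invoice example, and the 0.90 confidence floor are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SeamDesign:
    ai_job: str              # Q1: the AI's exact job
    human_job: str           # Q2: the human's exact job, incl. complacency checks
    failure_protocol: str    # Q3: the defined handoff when the AI is confused
    confidence_floor: float  # below this, the AI must not muddle through

def route(seam: SeamDesign, ai_confidence: float) -> str:
    if ai_confidence >= seam.confidence_floor:
        return f"AI proceeds: {seam.ai_job}"
    # Confusion triggers the written protocol, never a silent best effort.
    return f"Escalate: {seam.failure_protocol}"

invoice_seam = SeamDesign(
    ai_job="auto-approve standard invoices under $5,000",
    human_job="AP analyst reviews every escalation same day; weekly spot audits",
    failure_protocol="queue for human review with the AI's reasoning attached",
    confidence_floor=0.90,
)

print(route(invoice_seam, ai_confidence=0.62))  # -> Escalate: ...
```

The design choice that matters most is the last field: a defined point below which the AI must escalate rather than muddle through. Writing it down is what turns an ambiguous supervisor back into an operator.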
More to read
The Four Actors in Hybrid AI Architecture
A framework for hybrid AI architecture: how Humans, Hardware, Software, and AI Agents must each be assigned to the right tasks for AI to succeed.
Most AI Projects Fail
Most corporate AI projects fail not because the technology is broken, but because organizations deploy AI without redesigning how work gets done. This article introduces a four-actor framework (Humans, Machines, Software, AI) for decomposing business processes and matching each action to the right actor. It is essential reading for leaders seeking AI strategy consulting that delivers measurable results, not expensive pilots.
8 Hours with AI and College Football Bias
A BYU fan tests five AI systems on college football rankings. The experiment reveals how bias shapes AI outputs—even when objectivity is the goal.