Robotaxi’s Million-Dollar Question: Who’s Driving?
Human teleoperators play crucial roles in supporting the safety of robotaxis. Yet there is no legal clarity about who is liable if a remote assistant gives a robotaxi a bum steer, ending in a crash. Who was driving?
(Image: Waymo)
When a Tesla robotaxi with no human driver in the cabin wandered haplessly this week into an Austin, Texas, construction zone and got stuck with no way out, the passenger had to call a teleoperator. Asked if she could see what was happening to the robotaxi, a female support person calmly said, “Oh, I do see this.”
It took roughly four minutes for the Tesla robotaxi to first back up, then move forward, and eventually bumble its way out of the box. Meanwhile, the heavy-duty machine operators slowly made space for the Tesla.
I could almost hear Elon Musk saying, “Nobody got hurt, so what’s the big deal?”
The big deal is we don’t know who was driving the car remotely as it zigzagged out of the mess.
A week ago in testimony at a Senate hearing, Mauricio Peña, Waymo’s chief safety officer, acknowledged that Waymo’s robotaxis are supported by remote humans.
Peña stressed that those operators in the Philippines — presumably hired by Waymo’s third-party service supplier — are not actually driving vehicles. Rather, they are giving the vehicles “an input” or additional information. He emphasized that this is how “the Waymo vehicle is always in charge of the dynamic driving task (DDT).”
Peña’s revelation, nonetheless, opened the door to a new line of questioning. What different roles do “remote assistants,” “remote monitors” and “remote drivers” play? Do they bear different levels of responsibility?
Also left unanswered is a bigger question: Who is the driver, legally speaking, when a vehicle is remotely advised and moved?
Regardless of whether it is remote driving or remote assistance, these human operators play crucial roles in supporting the safety of robotaxis.
Yet, curiously, there remains no legal clarity about what happens next if a remote assistant gives a robotaxi a bum steer, possibly ending in a crash.
If this happens to Waymo, who sits on the legal hot seat? Will it be the Waymo Driver (the computer driver developed by Waymo), the company Waymo, the remote assistant, or the third-party outfit that runs remote teleoperation services for Waymo?
Put simply, if I’m a victim, whom do I sue?
Not knowing who was driving guarantees a long, drawn-out, unfathomable legal battle even before I set foot in a courtroom to accuse the driver.
In hopes of sorting out these ambiguous issues, I sought the help of Susanna Gallun, an attorney and transportation policy researcher at the University of Texas at Austin’s Center for Transportation Research (CTR).
Gallun is one of the legal experts studying “who the driver is.” She said, “I’ve been writing about the issue since 2019.” She called the topic “a moving target,” noting that “tracking 50 states’ AV legislation in real time” is no cakewalk.
Below is my email conversation with Gallun.
Q&A with Susanna Gallun
Junko: Waymo appears to put so much emphasis on differences between a “remote driver” and a “remote assistant.” Can you walk me through the different types of teleoperators that are being discussed and defined?
Susanna Gallun: There are three remote categories to consider: 1) remote assistance, 2) remote monitoring and 3) remote driving.
Junko: Who categorized each type and how is it defined?
Gallun: SAE International’s J3016 (as of April 2021) explicitly defines a remote driver as a driver who is not seated in a position to manually exercise in‑vehicle braking, accelerating, steering, and transmission gear selection input devices (if any) but is able to operate the vehicle. A remote assistant, by contrast, is a human who provides guidance to an Automated Driving System (ADS) when the ADS does not know what to do, but who does not perform the dynamic driving task (DDT). Typical examples include clarifying scene context, identifying an object, suggesting a route or action, or authorizing the system to proceed.
The SAE J3016 does not define a separate term “remote monitor” or “remote supervisor.”
However, J3016 treats “monitoring the driving environment” as one of the core DDT subtasks, along with lateral and longitudinal vehicle motion control. When a human remote driver actually performs these DDT subtasks (monitoring, steering, braking, etc.), that person is the driver under J3016, even though they are remote, and the scenario is “remote driving.”
When a human provides non‑DDT information or high‑level guidance (e.g., confirming an object [in the road] is an empty bag and that it is safe to proceed; granting permission to reroute), J3016 classifies this as remote assistance, not driving. “Monitoring only” (passively watching telemetry or video and waiting for alerts) is not itself a distinct DDT role in the taxonomy unless and until that human either performs DDT (remote driver) or provides remote assistance.
Junko: Okay. SAE International created these creatures and developed the taxonomy. How does the legal world see the responsibilities of all these remote operators?
Gallun: Traditional respondeat superior (Latin for “let the master answer,” a legal doctrine most commonly used in tort law) would treat an employee teleoperator as a human driver for negligence purposes, with vicarious liability for the employer.
But AV statutes that define the Automated Driving System (ADS) as the “driver” complicate this and leave remote assistance in a gray zone.
Emerging standards, like the Automated Vehicle Safety Consortium (AVSC) guidance and the TÜV SÜD process audits, focus on remote assistance programs as a safety function.
But they do not yet resolve, in a doctrinal way, who counts as the legal driver when remote assistance information changes the vehicle’s behavior.
Junko: That leaves almost everything ambiguous. I suspect that current driver law (which was never written for computer drivers) creates further complications in defining who a remote driver is.
Gallun: Where a statute expressly defines a “remote driver” as the legal operator, liability largely looks to ordinary driver law (which was made for humans).
Where the statute instead declares the automated driving system (ADS) to be the operator, such as in Texas, traditional driver-based liability disappears and is replaced by product- and enterprise-liability theories.
Junko: Wow. There must be so many variations in statutes among 50 different states…
Gallun: For instance, Alabama is problematic for companies trying to escape liability.
Alabama’s teleoperation statute expressly provides that a person who remotely operates a commercial vehicle is the operator of the vehicle for purposes of traffic and criminal law.
Junko: What is the legal consequence then?
Gallun: The remote driver in Alabama is treated exactly like an in-cab driver.
Ordinary negligence can apply directly to that person (who might be sitting in the Philippines, a non-citizen with no driver’s license for that state).
Traffic offenses, DUI provisions, accident-reporting duties, and criminal motor-vehicle statutes may attach to the remote human. Here, too, the motor carrier or teleoperation company can remain liable under traditional vicarious-liability doctrines (respondeat superior/agency).
Product-liability claims against the vehicle, ADS or teleoperation system remain available, but do not replace the human-driver layer. Once the statute names the remote driver as the operator, courts can apply familiar driver and carrier liability frameworks.
AV companies like staying in the product-liability lane; they do not want to be in the negligence lane of the law.
Product liability vs. Negligence
Junko: This reminds me of what Phil Koopman told me. He believes we should establish a duty of care for computer drivers equal to that of human drivers.
Gallun: Traffic law assumes a human driver, so our legal framework is going to run into trouble. It demands a paradigm shift.
Core duties in vehicle codes (exercise due care, obey and interpret traffic controls, yield, avoid collisions) are all written for a natural person making real-time judgments. An automated driving system is not recognized as a legal “driver” with affirmative statutory duties. Negligence doctrine is built around human conduct. Tort law evaluates breach using a “reasonable” person/driver standard. When software performs perception and control, courts lack a stable reference point for what “reasonable automated driving” means.
Junko: So, you are saying that we are still far from establishing that a computer driver bears the same “duty of care” required of people…
Gallun: Under state AV laws, crashes are being forced into product-liability boxes. Instead of negligent operation, cases are pushed toward design defect, failure to warn, or negligent deployment, all doctrines built for consumer products, not continuously learning, operational decision systems.
Here is where the transparency arguments come in. How does anyone know what the ADS was “thinking” when it messed up? You can’t put the ADS on the witness stand if it killed someone. We must keep civil, traffic and criminal law in their lanes, too.
Bottom line
To prove that an autonomous vehicle is a defective product, liability litigation will require reverse engineering of the software. This approach is ungainly, said Koopman. “Product liability is too heavyweight, too expensive, and is unlikely to actually resolve in favor of individual victims,” he noted.
Remote human teleoperators are monkey wrenches in the already clunky liability engine of autonomous vehicles. They only complicate what ought to be a simpler process for plaintiffs to settle safety issues in a robotaxi crash.



