Tesla’s Robotaxi Checks All Boxes for What Not to Do
What I just saw in this video is appalling. But nobody got killed. So should we conclude this incident was a harmless glitch and a happy Tesla ending?
I don’t make a living by recording every Tesla robotaxi ride in Austin, Texas. But I gratefully follow those who do. They are the best hope for keeping Tesla honest.
However, with Business Insider reporting that Tesla is starting robotaxi service in San Francisco this weekend, it seems evident that Tesla simply doesn’t care what the public says.
Above is a video shot by Sandy Munro and other engineers from Munro & Associates, an engineering consulting firm. Munro was invited by Tesla to experience a robotaxi ride “in the wild” in Austin.
Speaking of his ride experience, Munro wrote, “It didn’t disappoint (well, mostly).”
Maybe you agree.
Me? I’m appalled.
What I saw in this video (start at around 4:05) is a robotaxi whose driving behavior is akin to that of an out-of-towner who doesn’t know how Austin’s left-turn lanes are laid out, or, worse, a 16-year-old student driver who freezes in an intersection and “blocks the box.”
I’m singling out this video because it’s a clear illustration of the poor choices Tesla has cumulatively made in letting its Full Self-Driving (FSD) vehicles run on public streets.
It checks all the boxes of what not to do. Clearly, Tesla is not overthinking “safety” for either passengers or vulnerable road users.
Let me break it down.
1. First, Tesla’s robotaxi, positioned in the wrong lane for the turn, decides to take an unprotected left turn anyway. Two left-turn lanes, with a concrete barrier in between, are visible leading up to the intersection; the Tesla disregards both. This layout is apparently common in Austin, but Tesla’s robotaxi had no clue.
2. Before the Texas robotaxi launch, we all thought that Tesla had upped its safety game by giving its vehicles a crash course on Austin’s operational design domain (ODD) and road layouts. The goal was for Tesla’s robotaxi to drive as confidently as a professional cab driver who’s a veteran of the local streets. But this wasn’t the case.
3. Next, when the Tesla starts to turn left from the wrong lane, a “safety monitor” inside the robotaxi intervenes.
4. He does this by hovering over the screen next to the driver’s seat and pushing a tiny stop button. Note how he struggles because the stop button on the touch screen is very small. This ain’t a BIG RED STOP button. Instead, it’s yet another of the human-machine interface (HMI) issues that tech companies seem chronically unable to solve, or even to perceive.
5. Why the safety monitor hit the stop button is puzzling. Had he not intervened, would the Tesla have successfully turned left (illegally) without blocking the box? We don’t know.
6. The next question to ask is what, exactly, the job of this in-vehicle “safety monitor” is. He can obviously stop the car, but from the shotgun seat he can’t steer. Tesla describes his role as “Safety Monitor.” How is that different from “Safety Driver”?
7. In the video, the robotaxi comes to a complete stop at 4:35 in the middle of the intersection. The vehicle then sits in the intersection, in T-bone territory, for another 43 seconds. If you don’t think that’s a long time, count it out while picturing a line of cars accumulating, and honking, on your right.
8. So, who eventually moved the car? Was it one of Tesla’s remote teleoperators?
9. Remember, Tesla proudly shared via social media images of a roomful of remote teleoperators at its robotaxi launch. If a remote teleoperator eventually saved the car from blocking the box, what took so long? Was Tesla having the remote-driver latency problems predicted by Missy Cummings, a professor at George Mason University? Or were the teleoperators on a toilet break, schmoozing by the water cooler?
10. Since the oncoming traffic didn’t crash into the Tesla, nobody got killed. So should we conclude this incident was a harmless glitch and a happy Tesla ending?
Or is it a harbinger of worse screwups yet to come?
In the end, “There are more unknowns than knowns,” as Bryan Reimer, an MIT research scientist, told me on the phone. Instead of calling this a failure of automation, Reimer prefers to term it a systemic failure: “The systems required to support the automation failed to work.”
Last question: How is it that investors keep investing in, and regulators keep neglecting to regulate, a technology that has already been deployed, posing a threat to human safety, even though it remains prone to “systemic failure”?
This video is chilling. That the evaluators were so untroubled, so nonchalant, about such a significant failure is even more troubling. This vehicle and the system behind it are not ready for prime time.