The Blurry Convergence of AV and AI
There are myriad parallels in media coverage of AVs and AI. But two inconvenient truths are willfully left out of the public discourse: “operational cost” and “safety.”
Alex Roy, a well-respected autonomous vehicle expert, posted the following this week on LinkedIn.
“2015: 99% of self-driving car stories are BS.
“2025: 99% of AI stories are BS.
“Don’t contribute to the BS.”
I followed the emergence of robotaxis and have continued to cover them, reporting on the spin cycle of companies’ claims and counter-claims. This can be exhausting. It can tempt reporters to take the easy way out: just parrot corporate P.R., transcribe the propaganda and wash your hands.
Welcome to the blurry convergence of stenography and journalism.
This trend, muddling the coverage of self-driving cars, has migrated to AI.
But first, a review of self-driving cars.
It feels quaint now. But I remember the early days, when companies claimed that human “drivers” could sleep in the back seat while being autonomously transported from point A to point B.
The assumption was that AVs would be capable of intuitive and responsible driving.
AV occupants, during their commute, could read email, work on presentations, eat pizza, watch movies, play video games. That “convenience” marketing myth lives on. In 2022, when Mercedes-Benz launched Drive Pilot, a Level 3 conditionally automated driving system, the German automaker urged consumers to buy “the world’s first” L3 vehicles, promising that Drive Pilot could “give back time.”
The implication was that with Drive Pilot turned on, the L3 vehicle does the driving. You don’t have to be involved.
Meanwhile, robotaxi companies began amplifying the “safety” narrative that justifies their AV business. In Cruise’s 2023 full-page New York Times ad, the company famously said:
“Humans are terrible drivers. You might be a good driver, but many of us aren’t. People cause millions of accidents every year in the US. Cruise driverless cars are designed to save lives.”
This assertion raises two questions. Do we really need AVs? And regardless of any actual need, who stands to benefit most?
Let me be clear. Hope springs eternal for the future of AVs. But when robotaxis and computer drivers must coexist with human pedestrians, cyclists and drivers of conventional cars, compatibility is genuinely hard. People, who are not machines, deserve better answers and more convincing data on the unproven proposition that AVs “save people’s lives.”
Rollercoaster ride
In the short span of 10 years, the public perception of self-driving cars has swung back and forth between “it’s really just around the corner” and “this won’t happen in our lifetime.”
Now, with the sudden emergence of DeepSeek, AI is riding a similar pendulum.
The DeepSeek news was initially hailed as “AI’s Sputnik Moment” (Marc Andreessen). The story then changed to: “DeepSeek scraped OpenAI’s data” (Groq CEO Jonathan Ross) and created a cheaper version of ChatGPT.
The media’s DeepSeek assessment took only three days to reverse completely. The punchline, rarely considered by reporters pursuing a “scoop,” is that technology moves fast. So does the spin on tech advancements. We don’t know whether the latest word is the last word, but it probably isn’t.
Black box
Both AV and AI are susceptible to “quick takes” and to the temptation for dramatic, even definitive headlines.
This sort of rush to judgment derives from the black-box nature of the technologies inside robotaxis and large language models (LLMs).
Hardware building blocks can be explained. In a fully autonomous vehicle, the key elements are superior sensing technologies and powerful central processing SoCs. AI models for training and inference depend on vast numbers of processors such as Nvidia’s GPUs.
But whether for robotaxis or AI models, the hardware performance, or even the software metrics (such as token counts), claimed by technology suppliers does little to describe how effectively a product operates. “Systems” or “models,” after all, must unfortunately interact with humans, who live in an analog world that follows rules, laws and customs created by people.
Machines can’t operate in a vacuum, just swooping in and enforcing their own logic.
Outstanding questions remain both for self-driving cars and AI.
Are AVs safe? Given that “normalization” is a crucial data preprocessing step in machine learning, how well can AI respond to edge cases? And when AI hallucinates, how does the machine recognize its own delusions?
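To make the normalization question concrete, here is a minimal Python sketch (the feature and numbers are hypothetical) of z-score normalization, the kind of preprocessing step referenced above. An out-of-distribution input still gets normalized, but it lands far beyond anything the model saw in training, which is precisely where edge-case behavior becomes unpredictable:

```python
import numpy as np

# Hypothetical training data: speeds (km/h) a driving model saw during training.
train_speeds = np.array([60.0, 80.0, 90.0, 100.0, 110.0, 120.0])

# Z-score normalization, fit on the training distribution only.
mean, std = train_speeds.mean(), train_speeds.std()

def normalize(x):
    """Map raw inputs into the range the model was trained on."""
    return (x - mean) / std

print(normalize(train_speeds))  # in-distribution: values land roughly in [-2, 2]
print(normalize(250.0))         # edge case: ~8 standard deviations out, unseen in training
```

The preprocessing happily produces a number either way; nothing in the pipeline flags that the second input is one the model has never learned to handle.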
China, the boogeyman
China is your buddy if you want to build momentum for AVs and AI.
The claim that “the United States is already behind China” in AV deployment is a familiar falsehood. Invoking the need to keep pace with China gives AV companies and their lobbyists leverage when asking for more funding and regulatory approvals for public robotaxi operations.
But what if China isn’t the boogeyman anymore?
DeepSeek’s emergence struck a nerve in Silicon Valley precisely because it was born and developed in China. If DeepSeek indeed proves that the performance of its AI models is equivalent to ChatGPT’s while using fewer resources, it challenges two basic tenets of the U.S. AI buildup: 1) the more powerful the GPUs, and the greater the volume in which they are deployed, the better the outcomes AI models can generate; and 2) the more gigantic data centers we build, the faster we advance AI.
*******
Two inconvenient truths
There are myriad parallels in media coverage of AVs and AI. But two inconvenient truths are willfully left out of the public discourse: “operational cost” and “safety.” Or “scaling” vs. “precision.”
If the industry keeps kicking the can down the road instead of solving the issues associated with the development and operational costs of AVs and AI, it will simply take longer before these promising technologies can be used safely in real-world applications.
Still today, AVs need extensive infrastructure to support better AI training.
Take Dojo, a Tesla-designed supercomputer for computer vision video processing and recognition. It is used to train Tesla’s machine learning models to improve its Full Self-Driving (FSD) advanced driver-assistance system. While Tesla is reportedly spending $500 million to bring its Dojo supercomputer project to its Buffalo factory, Elon Musk noted on X last year that “Tesla will spend more than that on Nvidia hardware this year (2024). The table stakes for being competitive in AI are at least several billion dollars per year at this point.”
Further, there is the operational cost of running robotaxis to consider.
Even when robotaxi companies eliminate human drivers, they still require “teleoperation,” a service in which a human “driver” remotely monitors and can take control of an AV. Phil Koopman breaks down details on “remote drivers” on his Substack.
Last December, when CEO Mary Barra announced GM’s decision to shut down Cruise, she cited the high operational cost. Talking to analysts, she noted, "You've got to really understand the cost of running a robotaxi fleet, which is fairly significant, and again, not our core business.”
It’s a mystery to me why GM didn’t anticipate the problem. Indeed, less than two years before, Barra was saying that Cruise could generate $50 billion in annual revenue by 2030.
In a webinar this week hosted by the Edge AI and Vision Alliance, Ian Riches, vice president of global automotive at TechInsights, shared his thoughts on robotaxis.
“There were a significant number of robotaxi vendors at CES making a lot of noise. But it is baked into our models that this will be a relatively slow rollout, because of that capital-intensive nature [of this business].”
He described a long process of “moving from city to city, of fine-tuning performance [and] of achieving scale in that city” to find solutions that are actually viable.
Riches cited the futility of deploying five trial vehicles in a city of a million people. He explained, “So you need to go large, which requires potentially hundreds of millions, potentially even billions of dollars of investment for each city that you move into.”
Then, there’s the safety issue – or Mean Time Between Failures (MTBF).
Citing Tesla, Riches emphasized that the company is not necessarily increasing the precision of its robotaxis in terms of MTBF. Tesla’s challenge, he said, is “moving toward a much more robust system within the operational design domain they already have.”
In contrast, Waymo is “actually being very good at precision.” The problem, he added, is that Waymo needs to expand its vehicles’ availability, given the high costs it faces.
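For readers unfamiliar with the metric, MTBF is simply total operating time divided by the number of failures. A back-of-the-envelope sketch in Python (all figures are hypothetical, not drawn from Riches’ talk) shows how the arithmetic works:

```python
# Back-of-the-envelope MTBF comparison; all figures are hypothetical.
def mtbf(operating_hours: float, failures: int) -> float:
    """Mean Time Between Failures = total operating time / number of failures."""
    return operating_hours / failures

# Two imaginary fleets logging the same hours with different failure counts.
fleet_a = mtbf(100_000, 50)  # 2,000 hours between failures
fleet_b = mtbf(100_000, 5)   # 20,000 hours between failures

print(f"Fleet A: {fleet_a:,.0f} h | Fleet B: {fleet_b:,.0f} h")
```

A tenfold reduction in failures yields a tenfold improvement in MTBF; that is the “precision” axis Riches is describing.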
In sum, a robotaxi business once viewed as a necessary pivot for automakers serious about selling mobility as a service hasn’t met the glowing expectations of management consulting companies.
Where are we on the road to AGI?
Regardless of OpenAI or DeepSeek, the AI community “hasn’t solved hallucinations or problems with reliability,” as Gary Marcus noted on Substack.
Just as self-driving car developers are quietly practicing autonomous interruptus, so are the AI folks.
Lurking in the weeds is another issue: the extraordinary amounts of power and water needed to run the fast-proliferating network of massive AI data centers.
In my mind, the biggest offense committed by the AV and AI communities is their “for the greater good” mantra.
For example, AV proponents say that we need more robotaxis on public roads because AI needs to learn. More robotaxis means fewer drunken or inattentive human drivers on the streets killing people.
The AI community likewise touts proliferation as the road to perfection. But AI proponents have yet to offer a credible solution to AI’s biggest imperfection, its gluttonous power consumption.
Think about this. There’s no AI without data. Nor will there be any AI optimization without infrastructure capable of running diversified AI models. Demands for more AI models balloon the capital expenditure to support AI. If this is not a clear and present dilemma for AI’s gurus, what is?
Of course, AI promoters like Nvidia’s CEO Jensen Huang scoff at skeptics. In his GTC keynote speech last year, Huang envisioned AI “harnessing gravity to store renewable power, and paving the way towards unlimited clean energy for us all.”
Huang even thinks AI can teach “robots to assist to watch out for danger,” generate “virtual scenarios to let us safely explore the real world,” and “understand every decision…”
In short, say the gurus, don’t question or restrict AI development. Wait and see. AI will have answers to all the world's problems.
And AI has a bridge in Brooklyn to sell … cheap.