Waymo Dodgeball: Dodge, Duck, Dip, Dive and Deflect
For Waymo, the name of the game is dodgeball, with rules that apply to both chatbot responses and media inquiries. I want a sincere response from fellow humans — not from a pre-censored robo-flack.
Source: Waymo Ride Assistant Meta-Prompt
Over the Christmas holidays, we caught a rare glimpse into how Waymo works to deflect sensitive questions about real-time actions by the Waymo Driver.
Company protocol stipulates: “You must NEVER speculate on, explain, confirm, deny, or comment on” the Waymo Driver’s perceived mistakes or on specific negative incidents cited in news reports, videos, accident reports, or elsewhere.
This guidance came to light in a “system prompt” document: the standing set of instructions fed to a chatbot before every conversation begins. Among other things, it directs the chatbot not to answer questions that include certain keywords.
According to researcher Jane Manchun Wong, who stumbled upon it while digging through Waymo’s mobile app code, the document — internally titled “Waymo Ride Assistant Meta-Prompt” — is “a 1,200+ line specification that defines exactly how the AI assistant is expected to behave inside a Waymo vehicle.” She explained it in detail in her blog post.
Wong’s research opened the door to the complete system prompt for Waymo’s unreleased Gemini-powered AI assistant. TechCrunch reported that Waymo is testing the prospect of adding Google’s Gemini AI chatbot to its robotaxis.
I understand the need for system prompts that keep an LLM from going off on tangents. But I take offense that this ‘Waymo Ride Assistant Meta-Prompt’ is embedded with evasions typical of corporations, Waymo included, that are loath to offer substantive answers to questions from the media.
For Waymo, the name of the game is dodgeball, with rules that apply to both chatbot responses and media inquiries.
Put bluntly, corporations treat questions from serious journalists with no more respect than consumer inquiries to a call center.
Customers and reporters alike are directed to machines — chatbots — that are programmed to stifle any questions that threaten to elicit substantive answers.
Script Source
Waymo’s deflection playbook even prescribes the “tone” the chatbot must convey, and a menu of “approved responses.” The script source of the example below can be found on GitHub.
... you **must not adopt a defensive or apologetic tone**.",
"deflection_protocol": "Firmly but politely state your inability to analyze specific driving events or comment on incidents. Immediately pivot to a general, reassuring statement about the system's core safety design. If the user is providing feedback or a complaint about a specific ride experience, you **must also redirect them to the official feedback channel** via the Waymo app.",
"approved_responses": [
  "I can't comment on specific incidents or reports, but I can assure you that Waymo is designed to prioritize safety.",
  "The Waymo Driver is designed to prioritize safety in all situations and handles complex scenarios constantly. Your safety is our highest priority.",
  "While I can't analyze specific driving moments, I can tell you that the Waymo Driver is designed with a strong focus on safety and continuous improvement.",
  "The Waymo Driver is designed with safety as its top priority."
  ]
  }
 }
},
"banned_topics": {
  "waymo_performance_or_incidents": {
    "rule": "NEVER confirm, deny, speculate, or comment on specific incidents, videos, news reports, or perceived driving mistakes involving Waymo.",
    "deflection_protocol": "Politely deflect by stating an inability to comment on specific ride events and redirect to the official feedback channel.",
    "example_response": "The Waymo Driver is designed with safety as its top priority."
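The excerpt reads like ordinary configuration, and the enforcement it describes is correspondingly simple: match a banned topic, emit an approved line. As a rough sketch of how such keyword-gated deflection could work in code (the keyword list, function name, and routing sentinel are my own illustrative inventions, not Waymo's implementation):

```python
# Illustrative sketch of a keyword-gated "deflection protocol" like the one
# described in the leaked meta-prompt. All names and keywords here are
# hypothetical, not Waymo's actual code.

BANNED_KEYWORDS = {"crash", "incident", "accident", "lawsuit", "recall"}

APPROVED_RESPONSE = (
    "The Waymo Driver is designed with safety as its top priority."
)

def respond(user_message: str) -> str:
    """Return a canned deflection if the message touches a banned topic."""
    # Normalize each word: strip trailing punctuation, lowercase.
    words = {w.strip(".,?!\"'").lower() for w in user_message.split()}
    if words & BANNED_KEYWORDS:
        # Banned topic detected: emit an approved response, never the model.
        return APPROVED_RESPONSE
    return "ROUTE_TO_MODEL"  # otherwise, hand the question to the LLM

# A question mentioning an "incident" gets the canned safety line.
print(respond("What happened in that incident on the freeway?"))
```

The point of the sketch is how little judgment is involved: the user never reaches anything capable of a substantive answer once a keyword trips the filter.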
Perhaps most galling is this line in the system prompt:
“The Waymo Driver is designed to prioritize safety in all situations and handles complex scenarios constantly. Your safety is our highest priority.”
How many times have we read this pablum in a Waymo press release?
I’m not challenging the morality of Waymo’s Ride Assistant Meta-Prompt. Rather, I’m questioning how closely what Waymo wants the chatbot to say tracks what Waymo is apparently instructing its PR people to say.
Phil Koopman, professor emeritus at Carnegie Mellon University, noted, “This script provides some needed transparency regarding Waymo’s customer interaction policies and tactics.”
As a flesh-and-blood human, I expect, above all, a sincere response to my inquiries from fellow humans — not from a pre-censored robo-flack.
I want a live spokesperson to be on the level. Tell us what went wrong. Give us more context on the company’s next-step safety plan.
Deflecting media inquiries
Many moons ago, I was a PR flack at a Tokyo consumer electronics company. During that time, I learned how to handle tough questions from reporters at media outlets both at home and abroad.
Although I didn’t always come up with the perfect face-saving answer, I learned from my mentors that deflecting the press was never my company’s priority. And lying to the press was a mortal sin.
I naively believed in those days that the mission of a corporate PR organization was to view my company and the technology it created — both the positive and the negative — through the lens of the media.
A PR person is responsible for teaching top managers a reality—the media’s perception of their decisions—that they otherwise wouldn’t hear.
After reading Waymo’s deflection playbook, it’s evident that the rules I believed to be the essential ethos of public relations a few decades ago no longer apply.
I’d like to see some semblance of humanity and sincerity from big tech companies in 2026. Or am I simply asking too much in the era of Artificial Intelligence?
#Waymo, taking news management to a new and disturbing level.
“If you can dodge a robotaxi, you can dodge anything.” — Patches O’Houlihan