Vehicle OEMs Face AI-Accelerated Disruptions
Centralization, software-defined vehicles, and end-to-end control present three formidable challenges to the survival of incumbent OEMs, and to vendors such as Arm who must support all of the above.
(Image: iStock)
Junko and I spoke recently with Suraj Gajendra, VP for the automotive business at Arm, about the growing impact of AI on the auto industry. You can see an edited version of our conversation below (on our YouTube channel).
As we spoke, it became clear that vehicle manufacturers are facing three simultaneous and profound changes in the nature of their products. Any one of these shifts would be enough to keep engineering VPs awake at 3 am, but together they form a serious obstacle for both incumbent OEMs and newcomers.
Centralization
First, the intrusion of AI into vehicles is accelerating what had been a gradual trend in vehicle system architectures: centralization. Today, luxury cars contain dozens of processors of all sorts, running several different operating systems and spread across multiple electronic control units (ECUs). This allows physical separation of different functions and close matching of processor hardware to workloads. It also physically separates real-time tasks from non-critical workloads and—crucially—safety-critical tasks from potentially insecure tasks.
For reasons that will become clear, this highly distributed—and highly evolved—architecture is morphing into a single cluster of CPU cores in one ECU. All sensors will feed the cluster, and it will drive all actuators, displays, speakers, and the like.
For the manufacturer, this is a blessing and a nightmare. It vastly simplifies system hardware.
But all of today’s different software stacks, built around different requirements, have to be ported to, or in some cases reimagined for, a new heterogeneous cluster architecture with a new OS and middleware. Vendors with decades of experience and legacy code may struggle to adapt, while an EV startup with a fresh start and a few hundred eager young programmers may have a significant advantage.
For Arm there are challenges as well. Licensing embedded processor-core IP for application-dedicated SoCs such as engine controllers or wiper/washer managers is going out of style. The challenge now is to help the OEMs create centralized clusters that can support all these diverse workloads and can excel along four axes at once: functional safety, security, quality of service, and scalability. Arm’s expertise in cluster architectures, built on long relationships with datacenter chip developers, is gradually drawing the company into providing subsystem, rather than individual core, IP to the automotive OEMs.
A matter of definitions
A second sea change in design is taking place with, and helping to drive, centralization. This is the progression from hardware-defined to software-defined vehicles. Traditionally, every facet of the driver’s and passengers’ experiences—from performance and handling to cabin temperature—was determined by the vehicle hardware, as put together on the assembly line.
As more vehicle systems became electronic, some aspects of the experience became user-selectable. Driving profiles could alter transmission shift points and suspension dynamics, for instance, so the driver could choose between economy, comfort, and sport profiles. Now, designers are discussing so-called software-defined vehicles, in which everything from drivetrain performance to rear-seat entertainment is under software control. In principle a driver can decide the personality of her vehicle on each leg of a trip: Maserati to the next town, limo to take grandpa to his doctor’s appointment, SUV for the grocery run.
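The profile-switching idea above amounts to swapping one set of actuator calibrations for another at runtime. Here is a minimal sketch of that pattern; the profile names, parameter names, and numbers are hypothetical illustrations, not any OEM's actual calibration data.

```python
# Toy sketch of software-defined driving profiles: one set of calibration
# parameters per profile, swappable at runtime. All names and numbers are
# illustrative assumptions, not real calibration data.
from dataclasses import dataclass

@dataclass(frozen=True)
class DriveProfile:
    name: str
    shift_rpm: int            # hypothetical transmission upshift point
    damping: float            # hypothetical suspension damping (0..1)
    throttle_gain: float      # hypothetical throttle-map gain

PROFILES = {
    "economy": DriveProfile("economy", shift_rpm=2200, damping=0.4, throttle_gain=0.6),
    "comfort": DriveProfile("comfort", shift_rpm=2800, damping=0.5, throttle_gain=0.8),
    "sport":   DriveProfile("sport",   shift_rpm=5500, damping=0.9, throttle_gain=1.0),
}

class VehiclePersonality:
    """Holds the active profile; in a software-defined vehicle this could
    change on every leg of a trip."""
    def __init__(self, profile: str = "comfort"):
        self.active = PROFILES[profile]

    def select(self, profile: str) -> DriveProfile:
        self.active = PROFILES[profile]
        return self.active

car = VehiclePersonality()
print(car.select("sport").shift_rpm)  # 5500
```

An AI-defined vehicle, in Gajendra's framing, would go one step further: instead of selecting from a fixed table, a model would generate the parameter set continuously from trip context and driver state.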
Gajendra suggests a further step in this evolution. A specially architected and trained generative-AI model could take in a wide range of inputs, including purpose of the trip, traffic and weather, time, driver and passenger preferences, and driver biometrics. The model could then generate an AI-defined vehicle continuously adapted to the tasks, the external conditions, and the emotions of the humans.
For traditional vehicle designers this can sound like crazy talk. But in a market where infotainment has replaced performance, safety, and efficiency as a key selling point, it makes sense. Here the incumbent OEMs, with their long experience in delighting—or infuriating—customers, may have an advantage, if they can capture their experience in an AI model.
For Arm, the challenge centers on that AI model. Even with a carefully constrained, hand-optimized model, there will be a significant addition to the vehicle workload. The new tasks imply the need for AI accelerator hardware in the processing cluster and huge—by automotive standards—memory bandwidth. Those in turn imply a big jump in power consumption. All of these challenges will filter through the software stack and eventually land on the screens of the hardware developers.
End to end
An emerging debate is promising a third fundamental shift for OEMs: the concept of end-to-end vehicle control. Today’s vehicle automation systems, from smart cruise controls to robotaxis, almost all use modular software architectures. For instance, one module may preprocess and fuse sensor data. A second module would then identify and tag objects, while a third would use the tagged data to select a trajectory. A fourth module would convert the trajectory to actuator commands, while a fifth would monitor the whole pipeline for failures or safety violations.
The problem with this, Gajendra observes, is that each of these modules has to be coded, maintained, and regression-tested—a huge, continuing burden of software development. The alternative being proposed by some OEMs is end-to-end control. A single specially designed AI model—probably a streamlined model resembling today’s generative AI structures—would take in all the inputs from the vehicle and directly control the vehicle actuators and displays. It might even converse with the driver to gather further input.
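The contrast between the modular pipeline and end-to-end control can be sketched in a few lines. Every function below is a hypothetical stand-in for a real subsystem, not an actual ADAS stack; the point is only the shape of the two architectures.

```python
# Toy contrast: a modular automation pipeline vs. end-to-end control.
# All stage functions are hypothetical stand-ins for real subsystems.

def fuse_sensors(raw):          # stage 1: preprocess and fuse sensor data
    return {"fused": raw}

def tag_objects(fused):         # stage 2: identify and tag objects
    return {"objects": ["car_ahead"], **fused}

def plan_trajectory(tagged):    # stage 3: select a trajectory
    return {"trajectory": "keep_lane", **tagged}

def to_actuators(plan):         # stage 4: convert trajectory to commands
    return {"steer": 0.0, "throttle": 0.2, "brake": 0.0}

def safety_monitor(cmds):       # stage 5: watch for failures or violations
    assert 0.0 <= cmds["throttle"] <= 1.0
    return cmds

def modular_control(raw):
    """Each stage is separately coded, maintained, and regression-tested."""
    return safety_monitor(to_actuators(plan_trajectory(tag_objects(fuse_sensors(raw)))))

def end_to_end_control(raw, model):
    """One trained model maps sensor input directly to actuator commands;
    updates happen by retraining the model, not by re-coding stages."""
    return model(raw)

print(modular_control({"camera": "...", "radar": "..."}))
```

The maintenance burden Gajendra describes lives in the five stage functions of the modular version; in the end-to-end version, all of that logic is implicit in the weights of `model`.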
There are lots of reasons to be skeptical of this approach. But there are two compelling benefits for the OEM. First: once the model is coded there is in theory no further software development necessary. All updates are made by retraining the model. Second is an article of faith: that a sufficiently complex model, sufficiently trained, will eventually outperform any human-devised system.
For Arm, the challenges are many. First is scale: just how big would this model be? Today’s much more general generative AI models are too enormous to contemplate hosting in an ECU.
Second, how rapidly must the model update? That will set a baseline for compute and memory performance. Then there are architectural issues. How will actual control of drive train, steering, and braking be partitioned between the AI model and real-time domains? What kinds of isolation will be needed between the AI domain and functional-safety and security zones?
As answers emerge, they will impact Arm’s subsystem design as well as its IP developments.
Certainly there will be questions about the existing, proven subsystem architecture, and there will be unremitting pressure for more performance, more memory capacity, and lower power consumption.
These three evolutions, taken together, will present formidable challenges—to the survival of incumbent OEMs, to the emergence of challengers, and to vendors such as Arm who must support all of the above in their sometimes flailing pursuit of answers.
#AIToyToTools video podcast series — Part 1 of Arm episode:
Is LLM useful in automotive applications?
#AIToyToTools video podcast series — Part 2 of Arm episode:
Full Cloud-to-Car Architecture
This episode will drop on Thursday, Oct. 16 on Junko’s Talk to Us YouTube channel.
Please make sure to subscribe to the channel so that you don’t miss the next episode!

While not necessarily pertinent to the point of the article, I must admit that I reacted to the phrase "application-dedicated SoCs such as engine controllers ... is going out of style". Though this understanding is somewhat pervasive in the industry, it does not align with my experience. I understand where it is coming from, but I'd like to offer a couple of points to add some layers to the topic.

The first is perhaps only nitpicking at terminology. Things like engine controllers, traction motor controllers, battery managers, chassis controllers, and many others today tend to use MCUs as opposed to SoCs. In addition to the differences in core technologies (think Arm Cortex-A for SoCs and Cortex-R and Cortex-M for MCUs), these MCUs tend to use embedded non-volatile memory for execute-in-place operation and specialized I/O to support the myriad of hardware interfaces to sensors and actuators.

The second point: even if an OEM decides to centralize the strategy software for these functions, the need for the specialized I/O still exists, which in turn suggests continued use of MCUs, especially those with specialized I/O. As an example, IP such as the Generic Timer Module (GTM) is useful for handling the time-varying nature of rotating devices like engines and motors. So automotive MCU providers like Infineon, NXP, Renesas, STMicroelectronics, Microchip, and others will still have products to sell in the SDV world.