Agentic AI? Oh God, What Next?
"Agent engineers" are coming. It’s hard not to conjure a cartoonish image of a future workbench where an army of C-3POs mill amongst the few human engineers. What could possibly go wrong?
Ever since Nvidia CEO Jensen Huang started preaching that the next major leap in artificial intelligence is the transition to "agentic AI," I've been intrigued, even while rolling my eyes.
I said to myself: Oh God, what next? Is Leatherjacket saying that generative AI is already over the hill and the next big thing is agentic AI?
Well, maybe I’m just a lazy reporter complaining about more lessons to learn and more codes to decipher.
Once I finally buckled down, though, I was a little alarmed by this idea of "digital engineers" supposedly enabled by agentic AI.
Last week at GTC 2025, an annual AI lovefest hosted by Nvidia in San Jose, Huang said that every Nvidia software engineer will have 10 AI assistants, or "digital engineers," by year's end.
Given the shortage of engineers at tech companies, this must be good.
But it's hard not to conjure a cartoonish image of a future workbench environment in which an army of C-3POs mill and chatter Britishly amongst the last few surviving human engineers.
What could possibly go wrong?
Next came a keynote by Synopsys CEO Sassine Ghazi, who outlined an agentic AI development framework at SNUG, the Synopsys Users Group conference in Santa Clara, held the same week as Nvidia's GTC.
Describing the progress of AI-driven chip designs, Ghazi drew parallels to the automotive industry’s SAE levels of autonomous driving.
I understand Ghazi's urge to explain the future of AI-driven chip design in this fashion. In any context, "levels" are a convenient, if not always accurate, way to claim progress.
Banking on the transition of AI capabilities from RL (reinforcement learning) to LLMs (large language models) to agents, Ghazi foresees AI helping customers design big AI chips whose complexity has been increasing at breakneck speed.
See the slide below, which he shared on stage.
(source: Synopsys)
Here’s my interpretation:
* L1: LLM/Assisting — use AI as a co-pilot
* L2: Agents/Acting — AI agents handle specific tasks
* L3: Multi-Agents/Orchestrating — multiple agents collaborate, with orchestration across them
* L4: Adaptive Learning/Planning — cross-project learning capabilities
* L5: Fully Autonomous/Decision Making — fully autonomous chip design
Automated driving
Knowing what we know about the progress of autonomous vehicles, it's common sense to expect a few wrong turns in all the giddyup of navigating AI chip design.
From covering the automotive world, here’s what I know.
L3 automated driving is widely recognized as inherently problematic due to the difficulty of machine-to-human handovers.
L4 might be possible, but automakers touting L4 capabilities, such as Tesla, are fudging. They call their alleged L4 models L2+ or L2++ because they don't want to be held liable when the car runs over a puppy. (Remember: in L2, driving responsibility stays with the human, not the computer.)
As for L5 vehicles? Among automotive experts, the emerging consensus is that L5 is years, if not decades, in the future.
Before diving into the discussion of automated driving vs. automated AI chip designs, let me explore what exactly we mean when we say agentic AI.
I had the good fortune to talk with Synopsys executives, who were available at SNUG. Less hyperbolic than Huang, they discussed the adoption of AI models in their tool chains — from RL to LLMs to Agents — in a more practical and vigilant context.
Generative AI vs. Agentic AI
First, some fundamental observations. Generative AI and Agentic AI aren’t mutually exclusive.
The focus of generative AI is content creation. Its AI models can create coherent content, like essays and answers to complex problems, a la ChatGPT.
On the other hand, agentic AI facilitates decision-making, problem-solving and autonomy.
Agentic AI is designed to decide and act autonomously. It can pursue complex goals with limited human supervision. The industry expects agentic AI to blend the flexibility of LLMs with the accuracy of traditional programming.
Fundamental technologies for agentic AI
I asked the Synopsys executives whether the industry already has the fundamental technologies to make agentic AI a reality.
Raja Tabet, senior VP of engineering excellence at Synopsys, replied that much of the technology already exists in the open-source community, donated by companies like Microsoft and Meta.
Three things are necessary to build agentic AI capabilities: models, memory and an orchestration framework. The trick is to find, in the open-source community, what to leverage and what to customize while identifying the areas where you can innovate, Tabet explained.
Armed with memory, agentic AI knows the history across the different assets coming from the language model. With its ability to call APIs, Tabet explained, "You're dynamically collecting information you need to execute on the task that the agent is charted to do." This is like going from a bot to reasoning, he added, but agentic AI is expected to be able to take action.
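To make Tabet's three ingredients concrete, here is a minimal sketch of how an agent loop might wire a model, a memory and tool-calling together. This is my illustration, in Python, with entirely hypothetical names; it is not Synopsys' implementation.

```python
# A minimal agent-loop sketch: model + memory + tools (orchestration).
# All names here are hypothetical illustrations, not Synopsys' stack.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    model: Callable[[str], str]             # LLM call: prompt -> response
    tools: dict[str, Callable[[str], str]]  # APIs the agent may invoke
    memory: list[str] = field(default_factory=list)  # history across steps

    def run(self, goal: str, max_steps: int = 5) -> str:
        for _ in range(max_steps):
            # Prompt the model with the goal plus everything remembered so far.
            decision = self.model(goal + "\n" + "\n".join(self.memory))
            if decision.startswith("CALL "):
                # The model asked for a tool, e.g. "CALL timing_check block_a"
                parts = decision.split(" ", 2)
                tool, args = parts[1], parts[2] if len(parts) > 2 else ""
                result = self.tools[tool](args)
                self.memory.append(f"{tool}({args}) -> {result}")
            else:
                return decision  # the model produced a final answer
        return "max steps reached without a final answer"
```

Open-source orchestration frameworks follow essentially this pattern, with far more machinery around tool schemas, retries and guardrails, which is presumably the kind of thing Tabet means by leveraging the open-source community.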
Automated chip design
So, in the context of EDA tool chains, how does agentic AI look and what, exactly, can it do?
Shankar Krishnamoorthy, Synopsys' chief product development officer, invoked "a pyramid." What Synopsys has been doing over decades forms the pyramid's bottom layer: tools to optimize, place and route designs.
With Agentic AI, Synopsys plans to build “additional layers.” Krishnamoorthy said, “You’re building the model layer with reasoning. You're building the orchestration layer to essentially invoke multiple agents, and then very soon, you basically start to take an entire workflow, move it all into this pyramid so that the entire workflow can remove the human from the loop.”
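Here is a sketch of what such an orchestration layer might look like, reusing the hypothetical Agent class from the earlier sketch. Again, the specialist names are mine; Synopsys has not published this as its architecture.

```python
# Hypothetical orchestration layer: a coordinator that routes each stage of a
# chip-design workflow to a specialist agent (reusing the Agent sketch above).
class Orchestrator:
    def __init__(self, agents: dict[str, Agent]):
        self.agents = agents  # e.g. {"verification": ..., "power": ...}

    def run_flow(self, stages: list[tuple[str, str]]) -> list[str]:
        results = []
        for agent_name, task in stages:
            # Delegate each stage to the agent responsible for that step.
            results.append(self.agents[agent_name].run(task))
        return results

# Usage sketch: a two-stage flow handed to two specialist agents.
# Orchestrator(agents).run_flow([
#     ("verification", "run regression on block_a"),
#     ("power", "estimate leakage for block_a"),
# ])
```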
I take comfort in Synopsys' claims for automated chip design, if only because the company, unlike automakers with far less exposure to software engineering, is itself a heavy user of AI technologies.
As Ghazi noted, “We have hundreds and thousands of software developers inside Synopsys.” Because they develop and sell software, they are keenly aware of the importance of rolling out AI technologies prudently.
As I see it, agentic AI-driven chip design might not kill a user. But it could face fierce backlash from the user community if something goes sufficiently haywire to significantly delay a chip design.
What’s the difference between the previous generation of AI-driven tools (reinforcement learning) offered by Synopsys and what’s next?
Krishnamoorthy explained: if a workflow needed 40 steps to design a chip, the previous generation was able to make "each of those 40 steps a whole lot better, while the 40 steps don't change."
In the next phase (agentic AI), "you look at these 40 steps and decide maybe 15 can be done by agent engineers," he explained. "You still need humans to be actively involved in the remaining 25 steps."
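Taken literally, that arithmetic suggests a workflow in which every step is tagged by its owner. A toy rendering, where the 40/15/25 split comes from his example and everything else is hypothetical:

```python
# Toy rendering of Krishnamoorthy's example: a 40-step flow in which 15 steps
# move to agent engineers and 25 stay with humans. Step names are made up.
from enum import Enum

class Owner(Enum):
    AGENT = "agent engineer"
    HUMAN = "human engineer"

flow = [(f"step_{i:02d}", Owner.AGENT if i <= 15 else Owner.HUMAN)
        for i in range(1, 41)]

agent_count = sum(1 for _, owner in flow if owner is Owner.AGENT)
print(f"{agent_count} of {len(flow)} steps delegated to agents")  # 15 of 40
```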
When companies like Cruise and Waymo started offering robotaxis, they were cavalier in their advertising, claiming that humans are terrible drivers.
EDA companies know — or they’ve learned — better. Krishnamoorthy stressed, “It’s going to take a while to build a level of trust [among chip designers] and to build tools that can verify the outputs of agents.”
Who are agents, then?
Looking at the agentic world, it’s clear that agents aren’t humanoids shaped like C-3PO. They are, rather, software that serves specific functions.
Ravi Subramanian, Synopsys' chief product management officer, wonders, for example, whether we can have "verification agents, implementation agents, or power agents…"
In his opinion, the possibilities for agents depend on the valid outcomes humans can get from those agents. It’s critical for chip designers to think about their workflow, document it, and develop the necessary agent portfolio.
Bottom line:
On the surface, developers of autonomous driving appear far ahead in paving the way for automation. But along the way, automakers, and society at large, have learned how difficult it is to structure an environment where automation and human drivers/pedestrians co-exist. Automated technology can take wrong turns where no human would ever go. And then there's liability: when things go wrong, who's accountable?
The EDA industry has embraced automation enthusiastically, but its leaders know too well that their customers, design engineers, are a tough audience. Their goal is not to lift the automation level as quickly as possible. It's about providing tools that satisfy their customers' expectations for accuracy and reliability.
If autonomous driving is our guide, we are in trouble!! :-)
With all these new "flavors" popping up, how soon before we can get chocolate AI?