Edge AI: Separating Experiments from Deployments
The validity of Edge AI’s concept, technology and usefulness is about to be tested in the real world, as a tide of new devices gets deployed into commercial markets.
There are plenty of hard questions to answer before embarking on a commercial Edge AI deployment.
Here are a few:
• Are they useful?
• If so, useful to whom? Who will profit?
• Are the products one-offs?
• Do they scale?
• Which AI model does each run?
• What happens when the next AI model emerges?
• Which data set, among many, does each rely on?
• Who captured the data?
• How was it collected and analyzed?
• How accurate is the data set?
• Finally, why would anyone want this stuff?
Is Edge AI coming?
The IoT industry has spent more than a decade prophesying the emerging era of “Edge AI.”
The cynic in me used to regard “Edge AI” as a wedge for IoT companies to upgrade product lines while riding the hype cycle of a nascent technology.
Judging by the persistent talking points of Edge AI marketing, I’m still right about that.
In its effort to sell Edge AI, the industry has explained the steps needed for implementation as an oversimplified straight line: 1) pick your hardware, 2) shop the available models and 3) choose the tools you need to deploy Edge AI.
Voila! Your very own Edge AI device … if the objective is to launch one-off devices that are pretty good at one function.
But, for a meaningful deployment of Edge AI that might bear fruit commercially, the above is a recipe for yet another disappointing IoT product on the edge.
Work in progress
Jim McGregor, Principal Analyst & Founder of TIRIAS Research, called Edge AI “a work in progress.”
“The vision from what we have in the cloud to getting that down to the edge has been a bit of a challenge,” he noted. “Most companies have looked [at Edge AI] as kind of a bolt-on solution … by adding an NPU, a new chip, or a new IP block.”
“But Edge AI,” he stressed, “takes an entire platform – not only silicon, but the software, the tools, the models, and everything else.”
AI suitors are forced to spend too much time, said McGregor, “trying to figure out how we’re going to take the models, the information and the data that we have, and how we’re going to run them on this plethora of electronic devices to make them not only more connected, but more intelligent.”
Edge AI for the industrial market
The industrial market is a key segment where AI at the edge proves useful and effective, notwithstanding the complexity of implementation.
In a recent interview, Jamie Jeffs, industrial instrumentation director at 42 Technology, a consultancy working across energy, medtech, and industrial sectors, told us the process of “choosing hardware first” may be a “good place to start in experimenting with Edge AI.”
Calling this tactic a “lab-based Edge AI solution,” Jeffs cited the huge gulf that separates today’s Edge AI experiments and commercial deployment. The bridge must be a real process that demands rigor in engineering and serious data analysis upfront.
Scalability and Open Source
Two factors are necessary for the transition from Edge AI in the lab to commercial deployment: scalability and open source.
The last thing anyone needs is an Edge AI solution whose hardware, software and data sets are beholden to proprietary systems that force product designers to keep developing one-off solutions.
Consider the industrial market.
Even in the traditionally conservative industrial market, embracing AI has reached a “non-negotiable point,” according to Jeffs. AI fever has many organizations rushing to experiment with Edge AI. They all think, Jeffs said, that “we'll start deploying and see where it goes.”
He went on, “If companies have the capacity and the capital to be able to do that on a large scale, that's great.” But as he explained, “The biggest challenge we see with our clients is how they take that leap from a lab-based example right through to a scaled-up solution that can be deployed across multiple parts of a plant.”
Clients aren’t looking for one-offs, said Jeffs. The industry wants to leverage data sets that are openly available and scalable.
The case of Synaptics
Among the IoT chip suppliers yearning to play in the Edge AI segment is Synaptics, a midsize supplier of MCUs and MPUs in the embedded market. Synaptics is vocal in advocating scalability and open source as the linchpins of success in the nascent market.
John Weil, Synaptics’ VP and GM for IoT and Edge AI, explained in our recent interview, “We see scalability as how I bring large, complex AI models down to smaller devices.” He added, “The trick is, if everybody has a different execution or compute engine, like an NPU or a CPU/GPU complex, it’s significant work for the embedded engineering team” figuring out how to put any given AI model onto different compute engines.
“There isn't a single scenario that works across, for example, a $5, a $10 or a $15 silicon product today. That's very divergent,” he noted.
Bringing a “similar” AI experience to each Edge AI product requires a “massive heavy lift.”
To ease the development process of a family of Edge AI products, Synaptics aims to leverage open-source tools and hardware. Whether an engine is small, medium or big, “from a software tools point of view, they look identical,” said Weil.
“Yes, you will get different performances out of those engines, and you need to normalize your expectations for what you’ll get,” said Weil. “But our dream is that you'll be able to dial in that performance and make it more scalable. Whether it's $5, $10, $15, $20 products, there'll be something that gives you the same look and feel with a given AI model,” explained Weil.
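Weil’s point about “dialing in” performance so that one AI model spans $5 to $20 devices is, in practice, often a matter of precision scaling. As a purely hypothetical illustration (not Synaptics’ actual toolchain), here is a minimal sketch of post-training int8 weight quantization, one common technique for fitting a large float model onto a cheaper, memory-constrained edge device:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map float32 weights to
    int8 values plus one scale factor, shrinking storage roughly 4x."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for accuracy checks."""
    return q.astype(np.float32) * scale

# One fp32 weight matrix, as might ship to a higher-end device tier
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

# The same model compressed for a cheaper, memory-constrained tier
q, scale = quantize_int8(w)
err = np.max(np.abs(w - dequantize(q, scale)))
print(f"size: {w.nbytes} -> {q.nbytes} bytes, max error {err:.4f}")
```

The accuracy loss is bounded by the quantization step, which is the “different performances out of those engines” trade-off Weil describes: same model, same tooling interface, different precision per price tier.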
Where does open hardware come in?
Synaptics has partnered with Google to leverage the hyperscaler’s RISC-V-based open hardware initiative in the Edge AI market.
Open-source hardware means that “a group of individuals and companies around the world are working much in the same way that people work on Linux,” explained Weil. “They make contributions, suggestions, pull requests—for lack of a better word—on what kinds of specs change.”
Because it’s extensible and open, Weil is convinced that “a lot of innovation in the RISC-V specification will allow AI models to continue to run efficiently on that compute complex.”
Notably, Google shifted its Edge AI strategy from a proprietary domain-specific architecture based on Google Coral Edge TPU to an open-source design using RISC-V.
Google’s research team has come up with “Kelvin,” a RISC-V standards-based design for an ML accelerator aimed at 32-bit, low-power embedded systems. This IP block/base design is the foundation of Google’s partnership with Synaptics.
Synaptics is adding some of its knowledge to the Kelvin-based core complex, to offer “a very power efficient and interesting open hardware platform,” said Weil. Synaptics, however, is withholding details until an anticipated announcement in the fall.
Gambling on the future of Edge AI
One might describe Synaptics as an outlier among traditional MCU vendors in pursuit of the emerging Edge AI market. McGregor at TIRIAS Research observed that leading MCU vendors “have spent more time on optimization for their specific platform, not for the open environment,” as he explained why the open-source approach isn’t a thing for them yet.
Companies with big MCU development communities of their own apparently believe they have a lot to lose by going open source. McGregor pointed out that such companies tend toward a two-part plan:
A) Let's get AI running on the device
B) Let's develop the tools to optimize those AI models to run better on our devices
Synaptics doesn’t see this strategy as effective in the Edge AI market, whose customers are looking for scalable solutions.
Does this mean that open source is the be-all and end-all answer?
Weil said, “Maybe not everything is perfect today.” But the open-source community will continue adding elements. He added that Synaptics is also “active in contributing our own knowledge.”
He concluded, “As we look at the future of AI, we see open source as the better investment … we can take a different path.”
Related materials:
Below is the concluding episode of the Edge AI video podcast series from “Junko’s Talk to Us” on YouTube:
In this concluding episode, we discuss:
Differences between ‘open-source software’ and ‘open hardware’
Why should anyone use RISC-V instead of Arm?
RISC-V brings flexibility
Can RISC-V make ‘Edge AI’ scalable?
AI will follow Linux’s trajectory, but move much faster
The preceding two episodes of this series are below:
Edge AI video podcast — Part 2
In this part 2 episode, we discuss:
Google’s shift from Coral to Kelvin
Open hardware based on RISC-V
How the Google–Synaptics partnership differs from the Meta–Arm alliance
Synaptics-Google Roadmap
Hardware-Software co-design in Machine Learning
Edge AI video podcast — Part 1
In this Part 1 episode, we discuss:
Why the first wave of IoT didn't quite live up to expectations
How AI at the Edge is solving that issue
Context-aware devices and why we need them
Synaptics & Google: The importance of hyperscaler collaboration and open source