Rescuing Edge AI from a ‘Tangled Mess’
Google and Synaptics early this year announced their Edge AI collaboration. Here's their candid assessment of what led them to partner, and the pioneering work they are doing on HW/SW co-design.
For more than a decade, the electronics industry has been promising massive growth in Edge AI.
As I explain in part 2 of my AI video podcast, this is still the industry’s aspiration. The podcast explores whether a partnership between Google and Synaptics, unveiled earlier this year, might bring this dream closer to coming true.
Here’s how the challenge stands today.
Adding “intelligence” to the edge is imperative for effectively improving the interface between human users and machines.
However, expanding and accelerating the development of Edge AI devices across the broader embedded market has not been easy.
Little portability
On one hand, Google’s development of the edge Tensor Processing Unit (TPU)* ignited interest among chip vendors, who went on to develop a host of embedded AI accelerators, NPUs, SoCs and other variations.
[Editor’s note: *Google's TPUs are custom-designed AI accelerators, developed by Google to optimize machine learning workloads.]
On the other hand, as Billy Rutledge, systems research director at Google Research, acknowledged in our recent video podcast, this explosive industry-wide enthusiasm for proprietary AI solutions has resulted in “a tangled mess” for developers.
“If you're trying to build an AI model and bring it down to one of these edge devices, boy, you really have to jump through a lot of hoops with domain-specific languages, different compiler tool chains … You kind of need to have a PhD in compiler technology to make your model actually run the way that you want it to, and that's multiplied across the different number of architectures that you want to support.”
In other words, there is virtually no portability: an AI model built for one piece of hardware can’t simply run on another, forcing edge AI developers to spend their time re-optimizing for each specific platform.
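To make the problem concrete, here is a minimal, hypothetical sketch of the per-target conversion step a developer repeats for each platform, written in Python against TensorFlow Lite's post-training quantization as one example toolchain. The model path, input shape, and quantization settings are illustrative assumptions, not details from the Google-Synaptics work.

import tensorflow as tf

# Load a trained model (hypothetical SavedModel directory).
converter = tf.lite.TFLiteConverter.from_saved_model("my_model/")

# Edge accelerators typically demand 8-bit quantization, and each one
# imposes its own supported-ops and numeric constraints.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

# Calibration needs a representative dataset; random data stands in here.
def representative_data():
    for _ in range(100):
        yield [tf.random.uniform((1, 224, 224, 3))]
converter.representative_dataset = representative_data

with open("my_model_int8.tflite", "wb") as f:
    f.write(converter.convert())

# ...and a different vendor's NPU means redoing this recipe, often with
# an entirely different compiler toolchain and different constraints.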
Can you 10X it?
The partnership with Synaptics, announced early this year, isn’t the first time Rutledge’s Google Research team has dipped a toe into the edge AI market.
Early in 2019, his team launched a project named “Coral.” Rutledge described Coral as “a platform for building intelligent devices with local AI.” The goal was to help embedded device developers nurture their on-device AI ideas through a prototype-to-production process.
Rutledge’s candid assessment of Google’s own Coral project reveals why Google pivoted its strategy from building its own silicon to seeking a partnership with Synaptics.
After “graduating” Coral’s product portfolio into the industry via Asus Tech, Google Research leadership challenged Rutledge’s team to figure out how best to scale the [edge AI] project — “10X it” — going bigger and broader without fragmenting the market.
Rutledge’s team found several answers. The first was “moving away from a proprietary, confidential domain-specific architecture”; the second was to stop making silicon in-house.
Google realized the need for “great partners,” and the significance of “open-source and open standards that let Google interact better with academics, with industry and with people across the ecosystem of Machine Learning developers,” Rutledge explained.
Google promotes not only open-source software and tools, but also open-source hardware based on the RISC-V architecture.
Rutledge says Google has been “a key part of a lot of the different [RISC-V] extensions that have been ratified in the last couple of years, specifically around the math required to do machine learning.”
Google’s objective is to bring those extensions into reality with reference designs and architectures being built today and to work with partners like Synaptics “who can help Google turn them into real shipping silicon,” according to Rutledge.
Why work with Google?
For Synaptics, the partnership creates an opportunity for its technologies and products to bridge the “computing” happening in the cloud to edge AI devices like body-worn electronics, mobile phones and computers.
John Weil, VP & GM, IoT and Edge AI Processor at Synaptics, explained that chip companies have always enabled different tiers of computing. “Imagine running some [AI] model in the cloud, some model on a PC or mobile product, or [an] even smaller model on a deeply embedded device close to the human … At each step, we are adding another layer of compute with AI. Each of those devices can infer and make decisions about the surroundings differently.”
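Weil's layered-compute picture can be sketched in code. Below is a minimal, hypothetical Python example of the innermost layer he describes: a small quantized model making a local decision on an embedded device, using the tflite_runtime interpreter as one example runtime. The model file and the wake-word framing are assumptions for illustration only.

import numpy as np
from tflite_runtime.interpreter import Interpreter

# Hypothetical quantized model running fully on-device.
interpreter = Interpreter(model_path="wake_word_int8.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def heard_wake_word(audio_frame: np.ndarray) -> bool:
    # Infer and decide locally: no cloud round-trip, low latency,
    # and the raw audio never leaves the device.
    interpreter.set_tensor(input_details[0]["index"],
                           audio_frame.astype(input_details[0]["dtype"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])
    return bool(scores[0].argmax() == 1)  # class 1 = "wake word", by assumption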
By teaming with Google, Synaptics will be able to explore how best to “bring that intelligence to [a] price-performance, power-consumption setup that allows you to open new markets,” according to Weil.
Different from the Meta-Arm relationship
Rutledge and Weil believe their partnership is unique because there aren’t many hyperscaler-silicon marriages out there.
An exception is the Meta-Arm partnership. The companies have boasted that their union will impact a plethora of devices.
Google’s Rutledge made clear, “What's different between our work with Synaptics and the industry’s other arrangement is that everything we're doing is open source and license free.”
By following an Open Hardware paradigm, he noted, “We can get others to participate and contribute to our designs and our goals, while they can also take it and use it in an open way.”
Further, Synaptics’ Weil added, “The difference also lies in the speed of execution.”
With a partnership built on open hardware, he explained, “From a roadmap point of view, we can try and implement things at a much faster pace than some of the proprietary technologies that are out there.”
Closer collaboration
This podcast with Google’s Rutledge and Synaptics’ Weil reveals more than a business-as-usual partnership. The companies take pride in “pioneering” what they believe is a new level of hardware and software co-design.
Rutledge summed it up:
“Our team at Google Research is excited because we are able to co-design our underlying hardware architecture with the ML engineers on a day-to-day basis, test things through simulation and emulation in our labs, and then settle on what is the next real thing that makes sense, and then try to sell [Synaptics’] John [Weil] on putting it into the roadmap … so that we can show where we might go next year.”
In this episode, we discuss:
• Google’s shift from Coral to Kelvin
• Open hardware based on RISC-V
• How the Google-Synaptics partnership differs from Meta-Arm
• Synaptics-Google roadmap
• Hardware-software co-design in machine learning
*************************************
This interview is based on part 2 of the three-part Edge AI video podcast series produced by “Junko’s Talk to Us” in partnership with Synaptics.
Part 1: Let’s Start with ‘Why’
Part 2: Partnership with a Hyperscaler
Coming soon:
Part 3: Edge AI: Vision, Promise, and Forecast