Achronix is rolling out embedded FPGA IP designed as an AI accelerator for SoCs. Its CEO lays out the FPGA's specific AI advantages and explains why FPGAs are now coming into focus after years of GPU dominance in the AI market.
Achronix Semiconductor Corp. is rolling out Speedcore Gen4, its new generation of embedded FPGA IP designed as an AI accelerator to be built into SoCs.
Seeking more efficient data acceleration, Achronix's Speedcore Gen4 targets a broader set of applications, including computing, networking and storage systems for packet processing and interface protocol bridging/switching. But the Gen4's shiniest new feature, added to the architecture by Achronix, is its Machine Learning Processor (MLP) blocks.
By adding MLPs to the library of available blocks, Achronix claims that Speedcore Gen4, designed for 7nm process technology, “delivers 300% higher system performance for artificial intelligence and machine learning applications” compared to Achronix's own 16nm Speedcore.
“MLP blocks are highly flexible compute engines tightly coupled with embedded memories to give the highest performance/watt and lowest cost solution for AI/ML applications,” Achronix said.
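To put that pitch in concrete terms, the core operation such blocks are built around is the multiply-accumulate (MAC) at the heart of neural-network inference, with weights and activations fed from the adjacent embedded memory rather than fetched from distant DRAM. The sketch below is purely illustrative and is not drawn from Achronix documentation; the int8 input precision, int32 accumulation and loop structure are assumptions chosen for the example.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch only: the kind of int8 multiply-accumulate (MAC)
 * kernel that an FPGA machine-learning block would execute in hardware,
 * with operands held in tightly coupled embedded memory. The precisions
 * shown (int8 in, int32 accumulate) are assumptions for illustration,
 * not Achronix specifications. */
int32_t dot_product_int8(const int8_t *weights,
                         const int8_t *activations,
                         size_t n)
{
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++) {
        /* One MAC per iteration; a hardware block would perform many
         * such MACs per clock cycle, in parallel. */
        acc += (int32_t)weights[i] * (int32_t)activations[i];
    }
    return acc;
}
```

On a CPU this loop runs serially or at modest SIMD width; the argument for embedded FPGA fabric is that many such MACs can be laid out side by side, right next to the memories that feed them.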
Why AI?
These days, there is hardly a single chip company CEO not coveting the AI market.
Achronix president and CEO Robert Blake, however, told EE Times that he recognized the AI potential of FPGAs almost 20 years ago. When he first met Anna Patterson, then working at Google on search-engine algorithms, Blake said it dawned on him that massive parallelism would be the key to functions like page ranking. “I remember thinking that something like FPGAs have a significant upside.”
With Patterson focused on software then, and Blake on hardware, “we could not cross the divide at that time. But I had the recognition of AI, early on,” he said.
Of course, Blake is not saying that FPGAs are the only solution for AI/machine learning. Acknowledging a spectrum of AI accelerator solutions, ranging from CPUs and GPUs to FPGAs and ASICs, Blake said, “This market is growing so fast, so all of these different solutions will see an upside.”
Where CPUs offer maximum flexibility, ASICs' equal and opposite strength is efficiency. “But the question with ASIC is, can you retain flexibility to do different workloads?” Blake asked. Among the challenges of the next five to ten years, he noted, are “workloads we’d like to accelerate and analytics we’d like to do on these massive data sets we are collecting.”