Moving Machine Learning off the Cloud Calls for eFPGAs

Authored By:
Alok Sanghavi
Sr. Marketing Manager

Posted On: Apr 10, 2018

Artificial intelligence is reshaping the world we live in and opening opportunities in commercial and industrial systems applications that range from autonomous driving and medical diagnostics to home appliances, industrial automation, adaptive websites and financial analytics. Next up is the communications infrastructure that links systems together, moving toward automated self-repair and optimization. For example, the U.S. Navy plans to expand its Consolidated Afloat Networks and Enterprise Services (CANES) ocean combat network with AI, connecting ships, submarines and on-shore naval stations.

The Nimitz-class aircraft carrier USS Carl Vinson (CVN 70) transits the Pacific Ocean. (U.S. Navy photo)

These new architectures will perform functions such as load balancing and allocating resources for wireless channels and network ports based on predictions learned from experience. The applications they support demand high performance and, in many cases, low latency to respond to real-time changes in conditions and demand. They also require power consumption to be as low as possible, which rules out the solutions that underpin machine learning in cloud servers, where power and cooling are plentiful. A further requirement is that these embedded systems be always on and ready to respond even without a network connection to the cloud.

This combination of factors calls for a change in the way hardware is designed.

A New Way to Design Hardware

FPGAs are a ready solution, offering flexibility approaching that of a CPU with efficiency approaching that of an ASIC. Like ASICs, FPGAs let the designer implement an algorithm directly in logic, delivering huge parallelism and a hardware-optimized solution. Unlike ASICs, FPGAs can be reprogrammed with a new design in the blink of an eye. Compared to CPUs or GPUs, today’s FPGAs are power efficient, delivering many more operations per watt than processor-based solutions.
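
As a rough illustration of why implementing an algorithm directly in logic pays off, consider the multiply-accumulate loop at the heart of a neural-network layer. The Python sketch below is only a software model of the computation, not FPGA code: a CPU steps through the loop iterations largely one after another, while FPGA fabric can instantiate one multiplier per weight and reduce the products in a parallel adder tree, producing a result every clock cycle.

    # Software model of a dot product, the core operation in machine-learning
    # workloads. A CPU executes this loop sequentially; an FPGA can instead
    # instantiate one multiplier per weight and sum the products in a
    # parallel adder tree.
    def dot_product(weights, activations):
        acc = 0
        for w, a in zip(weights, activations):
            acc += w * a  # on an FPGA, these multiply-accumulates run concurrently
        return acc

    print(dot_product([1, 2, 3], [4, 5, 6]))  # prints 32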

And yet there is an even more attractive solution that takes this a step further. Rather than using a discrete FPGA, the new twist is to bring the FPGA architecture on board the CPU or SoC itself to further increase performance. This is the embedded FPGA (eFPGA).

An eFPGA removes the need to communicate chip-to-chip over bandwidth-limited connections such as PCI Express. It eliminates the need for data to be serialized and deserialized, because the FPGA fabric connects directly to the rest of the SoC through a large number of on-chip interconnects. The result is latency as much as 100× lower than with a discrete FPGA. In addition, die size and power are cut by as much as 50%, and overall cost by as much as 90%.
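
To see where a figure of that order can come from, here is a back-of-envelope comparison. Both latency figures are illustrative assumptions, not measurements; the real ratio depends on the device, the transaction size and the workload.

    # Illustrative latency comparison; both numbers are assumptions, not
    # measurements from any particular device.
    pcie_round_trip_s = 1e-6  # ~1 us to serialize, transfer and deserialize off-chip
    on_chip_hop_s = 10e-9     # ~10 ns to cross a few stages of on-chip interconnect

    speedup = pcie_round_trip_s / on_chip_hop_s
    print(f"On-chip access has roughly {speedup:.0f}x lower latency")  # ~100x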

How does it work? Designers specify their logic, memory and DSP resource needs, and then the eFPGA IP is configured to meet their requirements. LUTs, RAM blocks and DSP64 blocks can be assembled like building blocks to create the optimal programmable fabric for any given application.
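
Conceptually, that specification might look something like the sketch below. The field names and counts are hypothetical, chosen only to illustrate dialing in LUT, RAM and DSP resources; they are not the actual Speedcore configuration format.

    # Hypothetical eFPGA resource specification; the names and counts are
    # invented for illustration and are not the real Speedcore tool input.
    efpga_spec = {
        "lut_count": 25_000,            # logic capacity in LUTs
        "ram_blocks": 80,               # embedded RAM blocks for buffers and weights
        "dsp64_blocks": 64,             # DSP64 blocks for multiply-accumulate datapaths
        "interconnect_signals": 1_024,  # on-chip connections into the SoC
    }

    for resource, count in efpga_spec.items():
        print(f"{resource}: {count}")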

eFPGAs Lead the Way

Existing solutions such as multicore CPUs, GPGPUs and standalone FPGAs support advanced AI algorithms such as deep learning, but they cannot keep up with the demands designers are placing on hardware as machine-learning architectures evolve. eFPGAs offer a route to faster, smaller, cheaper and more power-efficient solutions, allowing designers to keep scaling their compute in line with rapidly escalating market requirements. Even the U.S. Navy sees the immense value in AI. Perhaps CANES will one day be driven by eFPGA technology.

For more information on Speedcore eFPGA IP, visit the Speedcore eFPGA page.