Steve Mensor, Vice President, Marketing
August 1, 2017
The data center market is now a commercial sweet spot for the tech sector, with growing revenues and a hotbed of innovation. But the industry faces a number of hurdles, chief among them oceans of unstructured data. Over the years, data centers have moved away from custom hardware toward commodity hardware to maintain scalability, redundancy and low cost, while still achieving high port counts, low power consumption and high performance.
Flexibility is critical because every task is different. The widespread adoption of Intel's x86 Xeon CPUs was a step in the right direction: they have offered data centers just enough software-level personalization to be flexible, have met performance requirements to date, and carry the low cost that comes with commoditized hardware.
However, data center architects are beginning to find that off-the-shelf hardware is not sufficient for the increasingly stringent demands placed upon their systems, particularly involving networking.
CPUs handle administrative and protocol tasks admirably, overseeing networking control-plane applications. But with their designed-in generalism and the accompanying overheads, they lack the performance and efficiency to support packet-based data-plane processing at layer four and below. CPUs are great at operating on the word- and block-oriented data structures found in the higher layers, but poor at handling the bit-intensive tasks of the lower OSI layers.
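To see why, consider the Ethernet frame check sequence, a CRC-32 computed over each frame. Done bit-serially, it takes eight dependent shift-and-XOR steps per byte; an FPGA unrolls the same logic into parallel gates and absorbs it at line rate. A minimal Python sketch of the bit-serial form (for illustration only, not production code):

```python
# Bit-serial CRC-32 using the reflected polynomial 0xEDB88320, as in the
# Ethernet FCS. Each byte costs eight dependent shift/XOR iterations --
# the kind of bit-twiddling loop that is cheap in FPGA fabric but
# comparatively costly on a general-purpose CPU.
def crc32_bitwise(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xEDB88320 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

print(hex(crc32_bitwise(b"123456789")))  # standard CRC-32 check value: 0xcbf43926
```

The inner loop's serial data dependency is exactly what resists CPU pipelining, while an FPGA implements it as a handful of XOR gates evaluated in a single cycle.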
So data center architects face an issue: The use of commodity CPUs in addressing data center bottlenecks is no longer meeting requirements in terms of performance, cost, power draw and scalability. How can they redress this without invoking specialist, hard-wired processors with hardware accelerators?
One answer to this problem is the use of high-performance FPGAs in a NIC solution. A NIC built around a high-performing, configurable hardware acceleration engine can support many bit-intensive tasks in a flexible way, while retaining many of the performance advantages of dedicated solutions. It's arguably the perfect blend, and the Achronix PCIe Accelerator-6D NIC board is a prime example.
By integrating a programmable NIC built around an appropriately high-performing FPGA, data center architects can, for example, unburden CPUs of laborious memory accesses and pipeline executions, and directly address system memory for protocol stack processing and physical layer transactions. Today, low-cost generic NICs push the processing load for remote DMA onto the system software stack, hurting both system performance and power consumption.
Of course, we were not the first company to think of applying flexible logic in this way. But speed and capability matter, and no other flexible logic on the market can match the memory bandwidth and memory density of the Achronix HD1000. It offers a multitude of hardened cores for memory management and L1/L2 Ethernet functions: six DDR3 controllers, two 10/40/100G Ethernet MACs and two PCIe Gen 3 controllers.
With such a high-performing chip at its heart, our recent NIC implementation has been able to use RoCE/iWARP to entirely bypass the CPU overhead for east-west transactions between servers, while supporting standard networking and tunneling protocols for more conventional north-south communications. It can also accommodate 40 Gbps of DDR bandwidth and 64 Gbps of PCIe bandwidth. This capacity is right-sized to support 40 GE network function virtualization (NFV) applications and high-performance Open vSwitch (OVS) offload.
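To put those figures in context, a back-of-the-envelope calculation (using standard Ethernet framing constants, not Achronix specifications) shows the packet rate a 40 GE port must sustain in the worst case of minimum-size frames:

```python
# Worst-case packet rate on a 40 GE link: each minimum 64-byte frame also
# occupies the wire with an 8-byte preamble and a 12-byte inter-frame gap.
LINE_RATE_BPS = 40e9
WIRE_BYTES_PER_FRAME = 64 + 8 + 12

frames_per_sec = LINE_RATE_BPS / (WIRE_BYTES_PER_FRAME * 8)
print(f"{frames_per_sec / 1e6:.1f} Mpps")  # roughly 59.5 Mpps
```

Sustaining on the order of 60 million packets per second is exactly the regime where per-packet software overhead dominates and hardware offload pays off.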
High-performance FPGA technology will be at the heart of the next generation of appliances that help data centers optimize their utilization of the miles of hardware racks they have invested in, without the need for wholesale customization of existing architectures.