1281 Win Hentschel Blvd., Suite E1935
West Lafayette, IN 47906
FWDNXT provides efficient, high-performance hardware and software solutions based on deep learning and neural networks. FWDNXT products are centered on a machine-learning processor called Inference Engine, designed to accelerate deep learning applications while delivering the highest possible computational efficiency at the lowest power consumption and memory bandwidth usage, with scalability from IoT devices to servers.
FWDNXT provides deep learning application services to train, deploy, and productize neural-network-based applications, with the goal of delivering best-in-class performance per operation.
Inference Engine FPGA
FWDNXT Inference Engine is a custom processor designed to accelerate the execution of deep neural network models. Inference Engine implemented on FPGA is a modular and scalable solution for existing programmable devices.
Inference Engine SoC
FWDNXT Inference Engine is a custom processor designed to accelerate the execution of deep neural network models. Inference Engine implemented as an SoC IP is a modular and scalable solution for existing SoC designs.
Our software development kit provides direct deployment from your favorite deep learning framework to your application. Our software takes trained neural network files from PyTorch, Caffe, and TensorFlow and compiles them directly for our accelerator, with no need for any programming.
We accelerate and deploy deep learning applications, providing a software compiler and SDK support for FPGAs and hard SoC IPs.