
Intel® FPGA AI Suite melds with OpenVINO™ toolkit to generate heterogeneous inferencing systems


Intel developed the FPGA AI Suite to reduce the effort needed to create powerful, heterogeneous AI deep learning (DL) inferencing systems using Intel® CPUs, Intel® SoC FPGAs, and Intel® FPGAs. The suite enables FPGA designers, machine learning engineers, and software developers to efficiently create and optimize AI inferencing platforms that meet best-in-class performance, power, and cost goals, and its built-in utilities speed the development of FPGA-based AI inference engines. Data scientists and engineers start by training inferencing models in familiar industry frameworks such as TensorFlow and PyTorch. The Intel® Distribution of OpenVINO™ toolkit then optimizes the trained model for performance and power, reducing its compute and memory footprint, with the optimization performed across a heterogeneous mix of CPU and FPGA resources. The Intel FPGA AI Suite further optimizes the FPGA portion of the resulting design, which then feeds directly into robust, proven FPGA development flows in the Intel® Quartus® Prime Software.

Stryker Corporation’s R&D team evaluated the Intel FPGA AI Suite and concluded:

  • “The ease-of-use of Intel FPGA AI Suite and OpenVINO toolkit enabled Stryker to develop optimized FPGA IP for deep learning inference. The inference IP was successfully integrated into an Intel FPGA using the Intel Quartus Prime Software. The example designs provided with the suite enabled the team to quickly evaluate different algorithms for different image sources.”
  • “Intel FPGA AI Suite and OpenVINO toolkit enable data scientists and FPGA engineers to seamlessly work together to develop optimized deep learning inference for medical applications.”

You can use the Intel Distribution of OpenVINO toolkit and the Intel FPGA AI Suite to develop inferencing systems for everything from DL-enhanced embedded systems all the way to advanced data-center workloads that run on multiple FPGA-accelerated servers. If you want to develop FPGA-accelerated DL inferencing models for detection, classification, segmentation, or related tasks, these tools handle that. If you want to stream video efficiently while using FPGAs to encode or decode video, or to process images for computer vision, they handle that as well; FPGAs are particularly well suited to this sort of pre- and post-processing. And if you don’t have an inferencing model of your own to train but can’t wait to dip your toes in the deep (learning) water, the Intel Distribution of OpenVINO toolkit includes pre-trained inferencing models for widely used neural networks that let you hit the ground running.

Figure 1 shows the development flow for an inferencing system based on this set of tools.


Figure 1: Intel FPGA AI Suite development flow

The Intel FPGA AI Suite currently supports Intel® Agilex™ FPGAs, Intel® Cyclone® 10 GX FPGAs, and Intel® Arria® 10 FPGAs, with a roadmap to support additional Intel FPGAs. Of course, there’s also plenty of Intel online support behind these tools as well.

For more information about the Intel FPGA AI Suite, including explanations of the four main hardware topologies that you can use to build heterogeneous AI inferencing systems with these development tools, see the Intel FPGA AI Suite page on the Intel website.