
Intel and MathWorks Utilize Artificial Intelligence to Reduce Fronthaul Traffic in 5G RAN

Mike_Fitton
Employee

High-speed, low-latency wireless communication is essential in today's hyperconnected 5G world. 5G networks generate vast amounts of data that must be transported synchronously between the radio and the baseband unit over fiber optic links. This traffic, in turn, demands a massive fiber optic network, driving significant capital expenditure (CAPEX) for network operators.

In 5G radio access networks (RANs), artificial intelligence (AI) can compress the data exchanged between the radio and the base station. Specifically, Intel and MathWorks use AI-based autoencoders to compress channel state information (CSI) data and reduce fronthaul traffic. AI-enabled compression preserves user data integrity, and the compression algorithms maintain the communication system's reliability and performance standards. Implementing CSI compression with autoencoders brings several benefits, including data reduction and a lower block error rate, which lets mobile network operators reduce fronthaul traffic, decrease overall fronthaul bandwidth requirements, and ultimately save costs.
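
To make the idea concrete, here is a minimal MATLAB sketch of such an autoencoder built with the Deep Learning Toolbox. The layer sizes, the 8x compression ratio, and the random placeholder data are illustrative assumptions on our part, not the production design described in this article:

% Minimal sketch: a dense autoencoder over flattened CSI vectors.
% All dimensions and the training data are illustrative assumptions.
inputDim  = 256;   % flattened CSI vector length (assumption)
latentDim = 32;    % compressed feedback size, i.e., 8x reduction (assumption)

layers = [
    featureInputLayer(inputDim, 'Name', 'csi_in')
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(latentDim, 'Name', 'bottleneck')  % payload that crosses the fronthaul
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(inputDim)                         % decoder output: reconstructed CSI
    regressionLayer];

X    = randn(1000, inputDim);   % placeholder CSI samples (assumption)
opts = trainingOptions('adam', 'MaxEpochs', 20, ...
                       'MiniBatchSize', 64, 'Verbose', false);
net  = trainNetwork(X, X, layers, opts);   % learn to reconstruct the input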

MathWorks and Intel have enabled this AI-powered fronthaul compression technique using MathWorks’ MATLAB* and Simulink*, DSP Builder for Intel® FPGAs, Intel Quartus® Prime design software, the Intel FPGA AI Suite, and the OpenVINO™ toolkit, targeting Intel SoC FPGAs. With Intel's cutting-edge tools and technologies, hardware developers can seamlessly integrate AI into the 5G RAN on Intel FPGAs, together with MathWorks’ 5G Toolbox* and Deep Learning Toolbox*.

MathWorks: Simplifying Wireless and AI Development for Intel FPGAs

MathWorks’ Deep Learning Toolbox offers algorithms, pre-trained models, and apps for designing and implementing deep neural networks. Using DSP Builder for Intel FPGAs, MATLAB functions and Simulink blocks can be converted directly into VHDL or Verilog for 5G wireless applications. Simulink includes libraries for simulation and hardware deployment, and DSP Builder blocks can be used within Simulink to generate HDL code optimized for Intel FPGAs.
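
As a quick illustration of working with the toolbox, the sketch below reuses the network trained in the earlier example: activations pulls out the bottleneck output that would travel over the fronthaul, predict reconstructs the CSI, and a normalized mean squared error serves as a simple fidelity check. The layer name and the metric are our assumptions:

% Sketch: evaluate the autoencoder trained in the earlier example.
% 'bottleneck' is the layer name we chose there (assumption).
Xtest = randn(100, 256);                        % placeholder test CSI (assumption)
z     = activations(net, Xtest, 'bottleneck', ...
                    'OutputAs', 'rows');        % compressed fronthaul payload
Xhat  = predict(net, Xtest);                    % decoder-side reconstruction

% Normalized mean squared error as a simple fidelity check
nmse = sum((Xtest - Xhat).^2, 'all') / sum(Xtest.^2, 'all');
fprintf('Compression %dx, NMSE %.4f\n', size(Xtest, 2) / size(z, 2), nmse);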

Intel FPGA AI Suite: Ease of Use

The Intel FPGA AI Suite was developed to make AI inference applications easy to build on Intel FPGAs. Its toolflow combines with MathWorks’ Deep Learning Toolbox to enable FPGA designers, machine learning engineers, and software developers to design and implement optimized FPGA AI applications. Libraries in the Intel FPGA AI Suite shorten FPGA development time for AI inference by supporting familiar, popular industry frameworks such as TensorFlow* and PyTorch* through the OpenVINO toolkit, while maintaining the robust, proven FPGA development flows built into the Intel Quartus Prime design software. The OpenVINO toolkit itself is an open-source project that takes deep learning models from all the major frameworks, including TensorFlow, PyTorch, and Keras*, and optimizes them for inference on a variety of hardware targets, including CPUs, CPU+GPU combinations, and FPGAs.
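
One plausible hand-off between these tools, sketched below under our assumptions, is exporting the trained MATLAB network to the ONNX format, which the OpenVINO toolkit can read and optimize. exportONNXNetwork ships as a Deep Learning Toolbox support package, and the file name here is hypothetical:

% Sketch: export the trained network to ONNX so the OpenVINO toolkit
% can ingest it. Requires the Deep Learning Toolbox Converter for
% ONNX Model Format support package; the file name is hypothetical.
exportONNXNetwork(net, 'csi_autoencoder.onnx');

From there, the OpenVINO toolkit can optimize the exported model for the chosen inference target.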

Technical Insights

We invite you to review the detailed Intel FPGA AI Suite documentation to learn more about the suite, its inference development flow, and model optimization. To explore the MathWorks CSI Feedback with Autoencoders example, visit www.mathworks.com/help for additional communications and AI reference designs and examples.