Application Acceleration With FPGAs
Programmable Acceleration Cards (PACs), DCP, FPGA AI Suite, Software Stack, and Reference Designs

HardSigmoid and Intel FPGA AI Suite

RubenPadial
New Contributor I

Hello,

I have a neural network trained with PyTorch. I exported it to ONNX and successfully obtained the IR model with the OpenVINO toolkit. The model includes a HardSiLU (or H-Swish) activation layer, which is defined as x·hardsigmoid(x). I compiled the graph using the Intel FPGA AI Suite, and the x·hardsigmoid(x) operation was correctly identified. The problem is that, even though the H-Sigmoid and H-Swish layers are listed as supported in the Intel FPGA AI Suite IP Reference Manual, the HardSigmoid operation seems to be executed on the CPU instead of the FPGA. Is it possible that the layer has not been properly recognized by the Intel FPGA AI Suite? Why is it being executed on the CPU rather than the FPGA?
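For reference, the export flow is roughly the following (a minimal sketch with a toy stand-in model; the module, input shape, and file names are placeholders, not my actual network):

import torch
import torch.nn as nn

# Toy model with a Hardswish activation (x * hardsigmoid(x)) standing in for the real network
class ToyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.act = nn.Hardswish()  # at opset 13 this is exported as HardSigmoid + Mul in the ONNX graph

    def forward(self, x):
        return self.act(self.conv(x))

model = ToyNet().eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "toynet.onnx", opset_version=13)

# IR conversion with the OpenVINO 2022.3 Model Optimizer (run from the shell):
# mo --input_model toynet.onnx --output_dir ir_model/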

[Screenshot: RubenPadial_0-1693988826896.png — dla_compiler output showing layers assigned to both the CPU and FPGA]

 

--

OpenVINO version: 2022.3

Intel FPGA AI Suite version: 2023.2

Device: Intel Arria 10 SoC FPGA

OS: Ubuntu 20.04

JohnT_Intel
Employee

Hi,


Can you share how you built your AI Suite FPGA bitstream? Or are you using a pre-generated FPGA bitstream?


RubenPadial
New Contributor I

Hello @JohnT_Intel 

To compile the graph I used the example architecture "A10_Generic.arch" with the dla_compiler command:

dla_compiler \
    --march "{path_archPath}" \
    --network-file "{path_xmlPath}" \
    --o "{path_binPath}" \
    --foutput-format=open_vino_hetero \
    --fanalyze-performance

where "path_archPath" is the path to "A10_Generic.arch", "path_xmlPath" is the path to the IR model XML file, and "path_binPath" is the output directory.

JohnT_Intel
Employee

Hi,


This is for the database generation, and you are creating it to run on both the CPU and FPGA (--foutput-format=open_vino_hetero). How do you confirm whether the current AI Suite is able to run your model?


RubenPadial
New Contributor I

Hi @JohnT_Intel 

I'm sorry, but I'm currently trying to understand the Intel FPGA AI Suite documentation, get used to the workflow, and apply it to my own neural network, so I may be a little confused.

At this moment, I'm not able to confirm whether this model can run on an Intel FPGA AI Suite IP core. In the examples provided in the "Intel FPGA AI Suite: Getting Started Guide" and the "Intel FPGA AI Suite SoC Design Example User Guide," the "--foutput-format=open_vino_hetero" option is used, and the model is run on a prebuilt core. I know this option makes the model run on both the CPU and FPGA, but I understand that only unsupported layers are run on the CPU.

The question is: why does the H-Sigmoid layer, which should be compatible with the Intel FPGA AI Suite according to the "Intel FPGA AI Suite IP Reference Manual," seem to be executed on the CPU after compiling the graph?
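In case it helps to narrow this down, this is a sketch of how I plan to double-check the per-layer device assignment on the runtime side with the generic OpenVINO query_model API (assuming the FPGA/CoreDLA plugin is available in the runtime environment, as in the SoC design example; the IR path is a placeholder):

from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder path to the IR model

# Ask the HETERO plugin which device each operation would be assigned to.
supported = core.query_model(model, "HETERO:FPGA,CPU")

for op_name, device in supported.items():
    print(f"{op_name}: {device}")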

JohnT_Intel
Employee

Hi,


H-Sigmoid should be possible if you are compiling the FPGA design. Can you confirm whether you have an AI Suite license? If not, then you might be limited in the models you can test.


RubenPadial
New Contributor I

Hi @JohnT_Intel 

Do you mean that the compilation and performance-estimation process is different with an Intel FPGA AI Suite license compared to without one?

Based on the information provided in the "Intel FPGA AI Suite: Getting Started Guide" and the "Intel FPGA AI Suite SoC Design Example User Guide," I understand that a license is needed to build the Intel FPGA AI Suite core IP bitstream, but compiling the graph does not require a license.

To test my own model without a license, can I compile it with the "A10_Performance.arch" file, as demonstrated in the examples, and test it with the prebuilt SD card image?

 

 

JohnT_Intel
Employee

Hi,


Please refer to "fpga-ai-suite-compiler-reference-manual.pdf" Chapter 3.


RubenPadial
New Contributor I

Hi @JohnT_Intel,

 

I'm sorry, but Chapter 3 of the "Intel FPGA AI Suite Compiler Reference Manual" explains how to obtain different performance and area estimations for a graph on a given architecture with dla_compiler, and how to optimize the architecture. However, it doesn't directly address my question of why the H-Sigmoid layer appears to be implemented in software instead of hardware.

The issue may be related to the "--fplugin" parameter, which is optional. The documentation mentions that the typical value is "HETERO:FPGA,CPU", and I understand that I may be getting the same result without specifying it because I have layers implemented on both the CPU and FPGA, as shown in the screenshot in the first message. Is that correct? In any case, I compiled the graph with "--fplugin HETERO:FPGA,CPU" and got the same result.

Furthermore, it doesn't answer my question about how to test the model. I can obtain performance estimations correctly, but I'm still working on running the model in the FPGA AI Suite core IP.

JohnT_Intel
Employee

Hi,


It will depend on which FPGA bitstream you are using. I suspect that the precompiled bitstream you are using does not support H-Sigmoid.


RubenPadial
New Contributor I

Hello @JohnT_Intel,

I tried to compile the graph with several architectures from $COREDLA_ROOT/example_architectures/, but I get the same result. I tried the A10, S10, and AGX7 Generic and Performance .arch files, and in none of them is H-Sigmoid implemented on the FPGA. The "Intel FPGA AI Suite SoC Design Example User Guide" requires graph compilation with A10_Performance.arch, so I understand the bitstream must be compiled with that architecture file. However, I can't find any information in Intel's documentation regarding layer/architecture dependencies. In Chapter 2.3 of the Intel FPGA AI Suite IP Reference Manual, all the compatible layers are listed, but the architectures are not specified. That's why I'm asking: according to the Intel documentation, the H-Sigmoid layer should be supported. Could you confirm it?
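For completeness, this is roughly how I swept the example architecture files (a simplified sketch; the IR path and output directories are placeholders, and I inspect the --fanalyze-performance output of each run by hand):

import glob
import os
import subprocess

arch_dir = os.path.expandvars("$COREDLA_ROOT/example_architectures")

for arch in sorted(glob.glob(os.path.join(arch_dir, "*.arch"))):
    out_dir = os.path.join("compiled", os.path.splitext(os.path.basename(arch))[0])
    os.makedirs(out_dir, exist_ok=True)
    # Same dla_compiler invocation as before, once per architecture file
    subprocess.run([
        "dla_compiler",
        "--march", arch,
        "--network-file", "model.xml",  # placeholder IR path
        "--o", out_dir,
        "--foutput-format=open_vino_hetero",
        "--fanalyze-performance",
        "--fplugin", "HETERO:FPGA,CPU",
    ], check=True)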

JohnT_Intel
Employee

Hi,


H-Sigmoid should be supported, but not by the provided bitstream. May I know if you have the license to regenerate the FPGA bitstream?


RubenPadial
New Contributor I

Hello  @JohnT_Intel,

But the NN graph is compiled with the architecture file, not with the bitstream. Should the architecture file be modified to enable H-Sigmoid compatibility?

JohnT_Intel
Employee

Hi,


Yes, the architecture needs to be changed in order to enable it. Please wait for the latest release, which is scheduled within the next few weeks. It will be reflected in AN933.


RubenPadial
New Contributor I

Hello @JohnT_Intel ,

Which new release are you referring to? Is it a new release of Intel FPGA AI Suite?

Are there any specified changes that need to be made in the architecture files as mentioned in the Intel documentation?

AN933 is "Updating Intel® Stratix® 10 FPGA Firmware," and I am working with an Intel Arria 10 FPGA.

JohnT_Intel
Employee

Hi,


Sorry for the typo. It will be released in AN993. The new release will be coming in a few weeks' time.


JohnT_Intel
Employee

Hi,


I am still checking with marketing to see when the updated AN993 will be available.


RubenPadial
New Contributor I

Hello @JohnT_Intel ,

This issue is still open: HardSigmoid is executed on the CPU instead of the FPGA. Why is that, and how can I solve it? Tested with the A10_FP16_Performance architecture as well.
