Application Acceleration With FPGAs
Programmable Acceleration Cards (PACs), DCP, FPGA AI Suite, Software Stack, and Reference Designs

Intel FPGA AI Suite input data type

RubenPadial
New Contributor I

Hello,

I'm using Intel FPGA AI Suite 2023.2 on an Ubuntu 20.04 host computer and trying to run inference with a custom CNN on an Intel Arria 10 SoC FPGA.

I have followed the Intel FPGA AI Suite SoC Design Example Guide and I'm able to compile the Intel FPGA AI Suite IP and run the M2M and S2M examples.

I have a question regarding the input layer data type. In the example resnet-50-tf network, the input layer appears to be FP32 [1,3,224,224] in the .dot file obtained after compiling the graph (see screenshot below).

[Screenshot: RubenPadial_0-1701717505530.png]

However, when running the dla_benchmark example I noticed that a U8 input data type was detected. After reading the .bin file of the compiled graph, from which dla_benchmark extracts this information, it can be seen that the input data type is U8 (see screenshot below).

[Screenshot: RubenPadial_1-1701717533938.png]

The graph was compiled according to the Intel FPGA AI Suite SoC Design Example Guide using:

omz_converter --name resnet-50-tf \
--download_dir $COREDLA_WORK/demo/models/ \
--output_dir $COREDLA_WORK/demo/models/
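
For reference, the input precision and shape of the IR produced by omz_converter can be double-checked on the host with the OpenVINO Python API. This is only a minimal sketch; the .xml path is an assumption based on the output_dir used above:

# Minimal sketch: print the input element type and shape of the converted IR.
# The .xml path is assumed from the omz_converter output_dir above.
import os
from openvino.runtime import Core

xml_path = os.path.expandvars(
    "$COREDLA_WORK/demo/models/public/resnet-50-tf/FP32/resnet-50-tf.xml"
)

core = Core()
model = core.read_model(xml_path)
for model_input in model.inputs:
    print(model_input.get_any_name(),
          model_input.get_element_type(),    # f32 expected, matching the .dot
          model_input.get_partial_shape())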

In addition, I compiled my custom NN with dla_compiler and I get the same result: the input layer is FP32 in the .dot and U8 in the .bin.

dla_compiler \
--march "{path_archPath}" \
--network-file "{path_xmlPath}" \
--o "{path_binPath}" \
--foutput-format=open_vino_hetero \
--fplugin "HETERO:FPGA,CPU" \
--fanalyze-performance \
--fdump-performance-report \
--fanalyze-area \
--fdump-area-report

Compilation was performed with the A10_Performance.arch example architecture in both cases.

In addition, the input precision should be reported in the input_transform_dump report, but it is not.

[Screenshot: RubenPadial_1-1701762768293.png]

Two questions:

- Why is the input data type changed?

- Is it possible to compile the graph with an FP32 input data type? (See the sketch below.)
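
Only as context from the plain OpenVINO side: the declared input tensor element type of a model can be pinned with the PrePostProcessor API before the IR is serialized and handed to dla_compiler. The sketch below is untested with the FPGA AI Suite, the file names are placeholders, and whether dla_compiler would keep an f32 input in the compiled .bin is exactly what I am asking:

# Minimal sketch (untested with dla_compiler): pin the IR input tensor type to f32
# and write a new IR. File names are placeholders.
from openvino.preprocess import PrePostProcessor
from openvino.runtime import Core, Type, serialize

core = Core()
model = core.read_model("custom_nn.xml")

ppp = PrePostProcessor(model)
ppp.input().tensor().set_element_type(Type.f32)   # declare an f32 input tensor
model = ppp.build()

# The resulting .xml/.bin pair would then be passed to dla_compiler --network-file.
serialize(model, "custom_nn_f32.xml", "custom_nn_f32.bin")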

 

Thank you in advance

RubenPadial
New Contributor I

Hello @JohnT_Intel,

I prefer to manage the issues in different threads as they seem to have different root causes.

RubenPadial
New Contributor I

Hello @JohnT_Intel,

Is there any news? The same issue is present when importing an ONNX model.

The ONNX model is imported to an IR model, and then the input layer is detected as U8 in the compiled graph. It should be FP32 instead.
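
For completeness, the declared input type of the ONNX model (or of the converted IR) can be checked the same way as with resnet-50-tf above. A minimal sketch, with "model.onnx" as a placeholder path and a single-input model assumed:

# Minimal sketch: print the declared input type of the ONNX model.
# "model.onnx" is a placeholder; a converted IR .xml can be read the same way.
from openvino.runtime import Core

core = Core()
model = core.read_model("model.onnx")  # assumes a single input
print(model.input().get_element_type(), model.input().get_partial_shape())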


RubenPadial
New Contributor I

Hello,

It seems to be fixed in Intel FPGA AI Suite 2023.3.

In Intel FPGA AI Suite 2023.2, the input data type is not modified when using the "--bin-data" dla_compiler flag.
