
OpenVINO 2022.1 POT Failed for ONNX Model

WT_Jacko
Beginner

Hi,

 

Below is my BKC:

OS: Windows 11

OpenVINO: 2022.1

 

I'm using OpenVINO 2022.1 with an ONNX model, but converting it to INT8 produces an error.

 

There are two files in the attachment: one is the JSON config file I use, and the other is the complete error log.

 

Thanks


Peh_Intel
Moderator

Hi Jacko,


Thanks for reaching out to us.


Your JSON config file looks fine, and it seems the errors occurred due to an incorrect mapping of the model outputs.


We would like to request your models (ONNX and IR) and the dataset used for POT for further investigation. Please also share the Model Optimizer conversion command with us.



Regards,

Peh


WT_Jacko
Beginner

Hi Peh,

OK, please see my attachments (models and dataset); you can download them from this link: https://drive.google.com/file/d/1Zkeo7RxlO2ljQ7xmEp0D4vOIwXOZgknA/view?usp=sharing

 

My mo command: mo --input_model <INPUT_MODEL>.onnx

Thanks for your help.

Jacko

WT_Jacko
Beginner

Hi Peh,

 

Thanks for your great support!

 

Jacko

Peh_Intel
Moderator (accepted solution)

Hi Jacko,


I was able to generate the INT8 model using POT after re-converting the ONNX model into Intermediate Representation (IR) with the following command:

mo --input_model model_final.onnx --output Conv_324,Conv_403,Conv_482 --reverse_input_channels


If your model was trained with images in BGR order, then you do not need to perform the RGB<->BGR conversion, so omit the command-line parameter --reverse_input_channels.


Here is a tutorial on converting a YOLOv5 ONNX weights file to IR: search for the keyword “Transpose” in Netron to find the convolution nodes, then use their names in the Model Optimizer parameters:

https://github.com/bethusaisampath/YOLOv5_Openvino

Regards,

Peh


WT_Jacko
Beginner
Hi Peh,

May I know what your POT command is and what your JSON file contains? Are they the same as mine?

Thanks
Jacko
Peh_Intel
Moderator

Hi Jacko,


Sorry for my late response.


Yes, I am using the same JSON file. My pot command is:

 

pot -c Solomon-int8.json

Regards,

Peh


WT_Jacko
Beginner

Hi Peh,

 

Sorry to bother you with another related question.

 

May I know whether OpenVINO POT can support post-processing such as NMS or ROI Align?

Following your suggestion, we added the --output parameter to the mo command, which cuts off parts of the model.

The generated IR files therefore do not include the post-processing from the end of the original model.

 

Thanks

Jacko

Peh_Intel
Moderator

Hi Jacko,

 

Here is an example that defines preprocessing and postprocessing in the Config file.
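For reference, a minimal sketch of how such preprocessing and postprocessing sections typically look in an Accuracy Checker YAML configuration is shown below. The adapter type, output node names, and all parameter values here are illustrative assumptions rather than settings for your specific model, so please check them against the Accuracy Checker documentation:

models:
  - name: yolov5_model
    launchers:
      - framework: dlsdk
        device: CPU
        adapter:
          type: yolo_v3                     # YOLO-style decoding of raw feature maps
          classes: 80                       # assumption: replace with your class count
          outputs: [Conv_324, Conv_403, Conv_482]
    datasets:
      - name: my_dataset
        data_source: images
        preprocessing:
          - type: resize                    # resize input images to the network size
            size: 640
        postprocessing:
          - type: resize_prediction_boxes   # map predicted boxes back to the image size
          - type: nms                       # non-maximum suppression
            overlap: 0.5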

 

Besides, you may refer to this tutorial (YOLOv5 Model INT8 Quantization based on OpenVINO™ 2022.1 POT API), which shows how to implement a customized quantization pipeline based on the POT API in the following steps (a code sketch follows the list):

 

1) Create a YOLOv5 DataLoader class: define data and annotation loading and pre-processing.

2) Create a COCOMetric class: define the model post-processing and accuracy calculation method.

3) Set the quantization algorithm and related parameters, then define and run the quantization pipeline.
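
For illustration, here is a minimal, runnable sketch of steps 1 and 3 using the 2022.1 POT Python API. The file names, input shape, and parameter values are assumptions; the toy data loader below yields random tensors and should be replaced by real image loading and pre-processing. A Metric class (step 2) is only required for accuracy-aware quantization, so it is omitted from this sketch:

import numpy as np
from openvino.tools.pot import DataLoader, IEEngine, load_model, save_model, create_pipeline

class RandomCalibrationLoader(DataLoader):
    # Step 1 stand-in: a real YOLOv5 loader would read images and annotations
    # and apply the same pre-processing as at inference time.
    def __init__(self, config):
        super().__init__(config)
        self._count = config['subset_size']

    def __len__(self):
        return self._count

    def __getitem__(self, index):
        image = np.random.rand(3, 640, 640).astype(np.float32)  # assumed CHW input shape
        return (index, None), image  # (annotation, data); labels are unused by DefaultQuantization

# Step 3: configure and run the quantization pipeline.
model_config = {'model_name': 'model_final',
                'model': 'model_final.xml',
                'weights': 'model_final.bin'}
algorithms = [{'name': 'DefaultQuantization',
               'params': {'target_device': 'CPU',
                          'preset': 'performance',
                          'stat_subset_size': 300}}]

model = load_model(model_config)
engine = IEEngine(config={'device': 'CPU'},
                  data_loader=RandomCalibrationLoader({'subset_size': 300}))
pipeline = create_pipeline(algorithms, engine)
compressed_model = pipeline.run(model)
save_model(compressed_model, save_path='int8_model')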

Hope this helps.

Regards,

Peh

 

WT_Jacko
Beginner

Hi Peh,

 

This INT8 model was generated using "mo --input_model model_final.onnx --output Conv_324,Conv_403,Conv_482 --reverse_input_channels", so the post-processing that follows those nodes is not included in the INT8 model.

I then apply manual post-processing to the INT8 model's output with a Python function, but this causes the detection step to find nothing.


So I compared the feature-map values (arrays) from the two models and found large differences. Is this normal?

The attachments contain the output matrices of the models; the values on the two sides differ considerably.

Does the INT8 model's output need some mathematical mapping before it can be used normally?

The picture below shows a small part of the 18900×8 matrix from the attachment; we can see a big difference between the two sides.

[screenshot: WT_Jacko_0-1665632818712.png]

 

Besides, I followed the link you shared last time (https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/YOLOv5-Model-INT8-Quantization-based-on-OpenVINO-2022-1-POT-API/post/1415310) and used a config file to run POT, so that the generated INT8 model would contain the post-processing. Unfortunately, POT failed with the error: AssertionError: Incorrect mapping then_graph outputs with Result_6617 outputs! Can't find port with ID 4 in If operation.

 

Please see my attachments (models, dataset, JSON, and YML); you can download them from this link: https://www.dropbox.com/s/floxyg81o5a4psq/attachment_20221013.7z?dl=0

 

My mo command: mo --input_model <INPUT_MODEL>.onnx

My pot command: pot -c Solomon-int8-yolov5.json

 

Thanks

Jacko

Peh_Intel
Moderator

Hi Jacko,

 

Please try inferencing with the optimized model again.  

 

1. Convert the ONNX model by cutting off parts.

mo --input_model model_final.onnx --output Conv_324,Conv_403,Conv_482

 

2. Get the output names of the XML file with Python, to define in the YAML file (an alternative using the newer OpenVINO Runtime API is sketched after these steps).

from openvino.inference_engine import IECore

ie = IECore()

model_xml = 'model_final.xml'
model_bin = 'model_final.bin'
net = ie.read_network(model_xml, model_bin)

# Print the name of every output node in the network
for name, info in net.outputs.items():
    print("\tname: {}".format(name))

 

 

3. Define preprocessing and postprocessing in the YAML file.

<attach modified_yml>

 

 

4. Run POT.

pot -c Solomon-int8-yolov5.json
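
As a side note to step 2, the same output names can also be listed with the newer OpenVINO Runtime API that ships with 2022.1 (a minimal sketch; the IECore-based snippet above belongs to the older Inference Engine API):

from openvino.runtime import Core

core = Core()
model = core.read_model('model_final.xml')  # the .bin file is found automatically next to the .xml
for output in model.outputs:
    print('\tname: {}'.format(output.get_any_name()))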

Regards,

Peh

 

WT_Jacko
Beginner

Hi Peh,

 

Following your suggestion, I can produce the INT8 model successfully.

But the result of my final INT8 model is wrong; you can check the picture below. The output has three shapes, (1,24,60,80), (1,24,30,40), and (1,24,15,20). I am sure these are just feature maps from the YOLO backbone, with no post-processing performed.

[screenshot: WT_Jacko_0-1665733651810.png]

 

The other picture below shows the result from the IR file converted with the command "mo --input_model <INPUT_MODEL>.onnx"; the outputs have the expected shapes, (87,), (87,), and (87,4), i.e., the results after NMS.

[screenshot: WT_Jacko_1-1665733963797.png]

 

Is there any chance to run POT directly on the IR file converted with the command "mo --input_model <INPUT_MODEL>.onnx"?

 

Thanks for your support.

Jacko

Peh_Intel
Moderator

Hi Jacko,


To apply POT, you must have a floating-point precision model (FP32 or FP16) converted into the OpenVINO Intermediate Representation (IR) format that can be run on CPU.
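
For example, assuming the 2022.1 Model Optimizer CLI, an FP16 IR could be produced with:

mo --input_model model_final.onnx --data_type FP16

Since --data_type defaults to FP32, the conversion commands earlier in this thread already produce an FP32 IR that POT can consume.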


Could you share the inferencing script you used to validate whether the INT8 model is correct? We would also like to know the number of classes used to train the model. If possible, please also share the source of the model so that we can define appropriate parameters in the config file.



Regards,

Peh


Peh_Intel
Moderator

Hi Jacko,


We have not heard back from you. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.



Regards,

Peh

