Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

INT8 optimization for YOLOv4

Rahila_T_Intel
Moderator

I was trying to do POT optimization for a YOLOv4 model.

I initially converted the YOLOv4 weights to a .pb file and then created IR files using OpenVINO.

When running the Model Optimizer, the bin/xml files are generated successfully, but I get two errors. I am using the latest version of OpenVINO (2021.1.110).

Please find the command and output below:

Command used: python /openvino_2021.1.110/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config /openvino_2021.1.110/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json --batch 1

Output: /openvino_2021.1.110/deployment_tools/model_optimizer/mo/middle/passes/fusing/decomposition.py:69: RuntimeWarning: invalid value encountered in sqrt

  scale = 1. / np.sqrt(variance.data.get_value() + eps)

/openvino_2021.1.110/deployment_tools/model_optimizer/extensions/ops/elementwise.py:100: RuntimeWarning: invalid value encountered in multiply

  operation = staticmethod(lambda a, b: a * b)

[ ERROR ]  10 elements of 64 were clipped to infinity while converting a blob for node [['data_add_1395913964/copy_const']] to <class 'numpy.float32'>.

For more information please refer to Model Optimizer FAQ, question #76. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?q...)

[ ERROR ]  118 elements of 18432 were clipped to infinity while converting a blob for node [['detector/darknet-53/Conv_1/BatchNorm/FusedBatchNorm/mean/Fused_Mul_1513315135_const']] to <class 'numpy.float32'>.

For more information please refer to Model Optimizer FAQ, question #76. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?q...)

 

[ SUCCESS ] Generated IR version 10 model.

[ SUCCESS ] XML file: ./frozen_darknet_yolov3_model.xml

[ SUCCESS ] BIN file:  ./frozen_darknet_yolov3_model.bin

[ SUCCESS ] Total execution time: 26.80 seconds.

[ SUCCESS ] Memory consumed: 1715 MB.
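The RuntimeWarning in the output above points at the likely source of the problem: the Model Optimizer's batch-norm decomposition computes scale = 1/sqrt(variance + eps), which becomes NaN as soon as a stored variance value is negative. A minimal synthetic reproduction (illustrative numbers only, not the actual model weights):

```python
import numpy as np

def bn_scale(variance, eps=1e-5):
    # Mirror of the batch-norm decomposition step quoted above:
    # scale = 1. / np.sqrt(variance.data.get_value() + eps)
    with np.errstate(invalid="ignore"):  # silence the same RuntimeWarning
        return 1.0 / np.sqrt(variance + eps)

# A healthy (non-negative) variance gives a finite scale; a corrupted
# negative variance, as found in a broken checkpoint, gives NaN.
good = bn_scale(np.array([0.25]))
bad = bn_scale(np.array([-1.0]))
print(good, np.isfinite(good).all())  # finite
print(bad, np.isnan(bad).all())       # NaN
```

Once such a NaN scale is multiplied into the fused constants, it propagates to the blobs the two [ ERROR ] messages complain about.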

I ran the Benchmark Tool to check the performance of the FP32 model in OpenVINO format.

[Step 11/11] Dumping statistics report
Count: 4200 iterations
Duration: 60207.01 ms
Latency: 139.37 ms
Throughput: 69.76 FPS

While trying to execute basic INT8 default quantization, I get NaN values:

File "/openvino_2021.1.110/deployment_tools/model_optimizer/extensions/ops/Cast.py", line 58, in infer
    new_blob, finite_match_count, zero_match_count = convert_blob(node.in_node(0).value, dst_type)

File "/openvino_2021.1.110/deployment_tools/model_optimizer/mo/middle/passes/convert_data_type.py", line 97, in convert_blob
    blob, dst_type))

mo.utils.error.Error: The conversion of blob with value "[[[[nan]] [[nan]] [[nan]] ... [[nan]] [[nan]] [[nan]]] ... [[[nan]] [[nan]] [[nan]] ... [[nan]] [[nan]] [[nan]]]]" to dst_type "<class 'numpy.uint8'>" results in rounding
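One quick way to confirm whether the NaNs are already present in the generated IR weights, before quantization even starts, is to scan the .bin file for non-finite values (a diagnostic sketch; reading the whole file as raw float32 is an assumption that holds for an FP32 IR):

```python
import numpy as np

def count_nonfinite_fp32(bin_path):
    """Count NaN/inf entries in a raw FP32 weights file such as an OpenVINO .bin."""
    weights = np.fromfile(bin_path, dtype=np.float32)
    return int(weights.size - np.count_nonzero(np.isfinite(weights)))

# Usage against the IR generated above, e.g.:
#   count_nonfinite_fp32("frozen_darknet_yolov3_model.bin")
```

A nonzero count would confirm that the corrupted batch-norm constants made it into the IR, so the failure is in the weights rather than in POT itself.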
 
Could you please help resolve this issue?
 

5 Replies
IntelSupport
Community Manager

Hi Rahila,

 

Thank you for reaching out to us. Did you use a custom trained YoloV4 weight? If yes, could you share the file here?

 

If no, could you share any commands/files that you used to execute INT8 default quantization?


We would like to replicate this issue on our end. Thank you.

 

Regards,

Adli


Rahila_T_Intel
Moderator

Hi,

I am using custom-trained YOLOv4 weights, so I am unable to share them on a public forum.

 

Please let me know if there is any way to connect with you internally.

 

Thanks,

Rahila T

IntelSupport
Community Manager

Hi Rahila,

 

Regarding the YOLO V4 model, what does the model do? Is it for classification?

 

For your information, the current release of the OpenVINO toolkit does not officially support YOLO V4. However, if you still need our help, you can share the custom YOLO V4 weights via private message/email.

 

Regards,

Adli


Rahila_T_Intel
Moderator

Hi Adli,

 

The issue was with the custom-trained YOLOv4 model.

When I tried with the latest custom-trained weight file, I was able to generate the bin/xml without any errors. I am also able to optimize the model using POT.

 

Regards,

Rahila T


Adli
Moderator

Hi Rahila,


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.


Regards,

Adli

