Employee

INT8 optimization for YOLOv4

I was trying to run POT (Post-training Optimization Tool) default quantization on a YOLOv4 model.

I first converted the YOLOv4 weights to a .pb file and then created the IR files using OpenVINO.

While running the Model Optimizer to generate the IR files, the .bin/.xml files are generated successfully, but I am getting two errors. I am using the latest version of OpenVINO (2021.1.110).

Please find the command and its output below.

Command used: python /openvino_2021.1.110/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config /openvino_2021.1.110/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json --batch 1

Output: /openvino_2021.1.110/deployment_tools/model_optimizer/mo/middle/passes/fusing/decomposition.py:69: RuntimeWarning: invalid value encountered in sqrt

  scale = 1. / np.sqrt(variance.data.get_value() + eps)

/openvino_2021.1.110/deployment_tools/model_optimizer/extensions/ops/elementwise.py:100: RuntimeWarning: invalid value encountered in multiply

  operation = staticmethod(lambda a, b: a * b)

[ ERROR ]  10 elements of 64 were clipped to infinity while converting a blob for node [['data_add_1395913964/copy_const']] to <class 'numpy.float32'>.

For more information please refer to Model Optimizer FAQ, question #76. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?q...)

[ ERROR ]  118 elements of 18432 were clipped to infinity while converting a blob for node [['detector/darknet-53/Conv_1/BatchNorm/FusedBatchNorm/mean/Fused_Mul_1513315135_const']] to <class 'numpy.float32'>.

For more information please refer to Model Optimizer FAQ, question #76. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?q...)

 

[ SUCCESS ] Generated IR version 10 model.

[ SUCCESS ] XML file: ./frozen_darknet_yolov3_model.xml

[ SUCCESS ] BIN file:  ./frozen_darknet_yolov3_model.bin

[ SUCCESS ] Total execution time: 26.80 seconds.

[ SUCCESS ] Memory consumed: 1715 MB.
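As a side note, the two RuntimeWarnings above hint at where the NaNs originate. The decomposition step shown in the log computes scale = 1 / sqrt(variance + eps) for each BatchNorm; a minimal sketch (plain NumPy, not Model Optimizer's actual code) shows how a negative variance in the frozen weights turns into NaN, which then propagates through the multiply that follows:

```python
import numpy as np

# Sketch of why Model Optimizer warns "invalid value encountered in sqrt":
# its BatchNorm decomposition computes scale = 1 / sqrt(variance + eps).
# A negative variance in the frozen graph (a sign of corrupt weights)
# makes sqrt return NaN, and the NaN then propagates through every
# multiply that follows.
eps = 1e-5
variance = np.array([0.5, 2.0, -1.0])  # the -1.0 entry is invalid

with np.errstate(invalid="ignore"):
    scale = 1.0 / np.sqrt(variance + eps)

print(np.isnan(scale))  # only the negative-variance entry is NaN
```

If the frozen .pb already contains such values, the IR inherits them, which would explain the "clipped to infinity" errors and the later INT8 failure.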

I ran the Benchmark Tool to get the performance of the FP32 model in the OpenVINO IR format.

[Step 11/11] Dumping statistics report
Count: 4200 iterations
Duration: 60207.01 ms
Latency: 139.37 ms
Throughput: 69.76 FPS
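(As a quick sanity check, the reported throughput is consistent with iteration count divided by total duration; values below are copied from the report above.)

```python
# Sanity check: benchmark_app throughput should equal iterations / duration.
count = 4200
duration_ms = 60207.01
throughput_fps = count / (duration_ms / 1000.0)
print(round(throughput_fps, 2))  # ≈ 69.76
```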

While trying to execute a basic INT8 default quantization, I get NaN values:

File "/openvino_2021.1.110/deployment_tools/model_optimizer/extensions/ops/Cast.py", line 58, in infer

new_blob, finite_match_count, zero_match_count = convert_blob(node.in_node(0).value, dst_type)

File "/openvino_2021.1.110/deployment_tools/model_optimizer/mo/middle/passes/convert_data_type.py", line 97, in convert_blob

blob, dst_type))

mo.utils.error.Error: The conversion of blob with value "[[[[nan]] [[nan]] [[nan]] ... [[nan]] [[nan]] [[nan]]]]" to dst_type "<class 'numpy.uint8'>" results in rounding
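For what it's worth, the failure can be reproduced in isolation. The sketch below is a simplified illustration of the kind of finiteness check that convert_blob enforces, not Model Optimizer's actual code:

```python
import numpy as np

def check_blob_convertible(blob, dst_type=np.uint8):
    """Reject blobs with non-finite values before casting (simplified
    illustration of Model Optimizer's convert_blob validation)."""
    finite = np.isfinite(blob)
    if not finite.all():
        raise ValueError(
            f"{np.count_nonzero(~finite)} of {blob.size} elements are "
            f"non-finite; conversion to {dst_type} would be meaningless"
        )
    return blob.astype(dst_type)

# A blob full of NaN, like the one in the error above, is rejected.
blob = np.array([[np.nan], [np.nan], [0.5]], dtype=np.float32)
try:
    check_blob_convertible(blob)
except ValueError as e:
    print(e)
```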
 
Could you please help resolve this issue?
 
3 Replies
Community Manager

Hi Rahila,

 

Thank you for reaching out to us. Are you using custom-trained YOLOv4 weights? If so, could you share the file here?

 

If not, could you share the commands/files that you used to execute the INT8 default quantization?


We would like to replicate this issue on our end. Thank you.

 

Regards,

Adli


Employee

Hi,

I am using custom-trained YOLOv4 weights, so I have limitations on sharing them in a public forum.

 

Please let me know if there is any way to connect with you privately.

 

Thanks,

Rahila T

Community Manager

Hi Rahila,

 

Regarding the YOLOv4 model, what is the application of the model? Is it for classification?

 

For your information, the current release of the OpenVINO toolkit does not officially support YOLOv4. However, if you still need our help, you can share the custom YOLOv4 weights via private message or email.

 

Regards,

Adli

