Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

NaN values while using Post Training Optimization Toolkit

Sruthikeerthi
Beginner

While trying to quantize an FP32 YOLO V4 model using the default quantization algorithm of the Post-Training Optimization Toolkit, NaN values appear and the following error message is seen.

accuracy_checker WARNING: /home/user/intel/openvino_2020.3.194/deployment_tools/model_optimizer/extensions/ops/fakequantize.py:85: RuntimeWarning: invalid value encountered in less_equal
underflow_mask = x <= input_low

13:07:04 accuracy_checker WARNING: /home/user/intel/openvino_2020.3.194/deployment_tools/model_optimizer/extensions/ops/fakequantize.py:86: RuntimeWarning: invalid value encountered in greater
overflow_mask = x > input_high

Traceback (most recent call last):
  File "/home/user/.conda/envs/yolov4-openvino/bin/pot", line 11, in <module>
    load_entry_point('pot==1.0', 'console_scripts', 'pot')()
  File "/home/user/.conda/envs/yolov4-openvino/lib/python3.7/site-packages/pot-1.0-py3.7.egg/app/run.py", line 36, in main
    app(sys.argv[1:])
  File "/home/user/.conda/envs/yolov4-openvino/lib/python3.7/site-packages/pot-1.0-py3.7.egg/app/run.py", line 55, in app
    metrics = optimize(config)
  File "/home/user/.conda/envs/yolov4-openvino/lib/python3.7/site-packages/pot-1.0-py3.7.egg/app/run.py", line 125, in optimize
    compress_weights(compressed_model)
  File "/home/user/.conda/envs/yolov4-openvino/lib/python3.7/site-packages/pot-1.0-py3.7.egg/compression/graph/passes.py", line 682, in compress_weights
    model.clean_up()
  File "/home/user/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/graph/graph.py", line 973, in clean_up
    shape_inference(self)
  File "/home/user/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/middle/passes/eliminate.py", line 169, in shape_inference
    node.infer(node)
  File "/home/user/intel/openvino_2020.3.194/deployment_tools/model_optimizer/extensions/ops/Cast.py", line 60, in infer
    new_blob, finite_match_count, zero_match_count = convert_blob(node.in_node(0).value, dst_type)
  File "/home/user/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/middle/passes/convert_data_type.py", line 97, in convert_blob
    blob, dst_type))
mo.utils.error.Error: The conversion of blob with value "[[[[ nan]]

[[ nan]]

[[ nan]]

...

[[ nan]]

[[ nan]]

[[ nan]]]


[[[158.]]

[[100.]]

[[107.]]

...

[[129.]]

[[130.]]

[[105.]]]


[[[ 0.]]

[[114.]]

[[232.]]

...

[[ 43.]]

[[ 0.]]

[[ 9.]]]


...


[[[142.]]

[[ 98.]]

[[ 93.]]

...

[[118.]]

[[131.]]

[[ 95.]]]


[[[ 73.]]

[[212.]]

[[167.]]

...

[[119.]]

[[254.]]

[[ 38.]]]


[[[141.]]

[[178.]]

[[149.]]

...

[[117.]]

[[149.]]

[[169.]]]]" to dst_type "<class 'numpy.uint8'>" results in rounding

OpenVINO Version : 2020.3.194
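
For reference, one way to check whether NaN values are already present in the FP32 IR weights before quantization is the minimal diagnostic sketch below (not part of POT). It assumes the IR's .bin file can be read as a flat float32 array; the file name is a placeholder, and non-weight constants stored in the same file may be misread, so treat the counts as a rough heuristic.

import numpy as np

# Read the FP32 IR weights file as one flat array of 32-bit floats
# (assumption: the .bin of an FP32 IR mostly holds float32 weight data).
weights = np.fromfile("yolov4.bin", dtype=np.float32)

print("total values:", weights.size)
print("NaN values  :", np.count_nonzero(np.isnan(weights)))
print("Inf values  :", np.count_nonzero(np.isinf(weights)))

If NaN values already show up here, the problem likely originates in the FP32 model itself rather than in the quantization step.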

IntelSupport
Community Manager

Hi Sruthikeerthi,

 

Thank you for reaching out to us. If possible, I would suggest installing the latest version of the OpenVINO toolkit, which is version 2021.1, and running the quantization command again. Please refer to the following link to download it:

https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html

 

If it doesn't work, please share some additional information regarding:

  • The command/instructions used for the model quantization process (see the sample sketch below)
  • YOLOv4 model information
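
For illustration, a typical DefaultQuantization setup looks roughly like the sketch below. All file names, the Accuracy Checker configuration, and the parameter values are placeholders, not a verified configuration for your model.

import json

# Hypothetical POT configuration for DefaultQuantization; adapt every path
# and parameter value to the actual model and dataset.
pot_config = {
    "model": {
        "model_name": "yolov4",
        "model": "yolov4.xml",    # FP32 IR produced by the Model Optimizer
        "weights": "yolov4.bin"
    },
    "engine": {
        # Accuracy Checker configuration describing the calibration dataset
        "config": "yolov4_accuracy_checker.yml"
    },
    "compression": {
        "algorithms": [
            {
                "name": "DefaultQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300
                }
            }
        ]
    }
}

with open("yolov4_int8.json", "w") as f:
    json.dump(pot_config, f, indent=4)

# Quantization is then launched from the command line:
#   pot -c yolov4_int8.json

Sharing the equivalent of this configuration and the exact pot command you used will help us narrow down where the NaN values come from.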

 

Regards,

Adli

 

IntelSupport
Community Manager

Hi Sruthikeerthi,


This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.

 

Regards,

Adli

