Hi Intel,
While quantizing my model, I ran into the same problem described in the thread below.
https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/INT8-optimization-for-YOLOv4/m-p/1235491#M21838
I followed the tutorial linked below to quantize the model (Post-Training Optimization Tool).
https://docs.openvinotoolkit.org/latest/pot_README.html
The YOLOv4 model I am using is my own customized network, pruned from yolov4-tiny-3l, so it is not the standard yolov4-tiny architecture.
I have attached the relevant files; please help me resolve this.
This is the first time I have encountered a YOLOv4 quantization failure.
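For reference, this is a minimal sketch of the POT configuration I pass to `pot -c`, written here as a Python dict for readability. The model and engine paths are placeholders, not my actual files; only the algorithm parameters match the log output below.

```python
# Sketch of a POT config for AccuracyAwareQuantization (placeholder paths).
pot_config = {
    "model": {
        "model_name": "yolov4-tiny-3l-pruned",       # placeholder name
        "model": "model/yolov4-tiny-3l-pruned.xml",  # placeholder IR .xml
        "weights": "model/yolov4-tiny-3l-pruned.bin",
    },
    "engine": {
        # placeholder Accuracy Checker config; it defines the VOC2012
        # voc_detection annotation converter seen in the log
        "config": "accuracy_checker.yml",
    },
    "compression": {
        "target_device": "ANY",
        "algorithms": [
            {
                "name": "AccuracyAwareQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300,
                    "maximal_drop": 0.01,
                },
            }
        ],
    },
}
```

The error output from running POT follows.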
Error:
INFO:app.run:Output log dir: backup/yolov4-tiny-3l-gray-license_plate_prune_0.46_keep_0.01_AccuracyAwareQuantization/2021-04-08_16-39-59
INFO:app.run:Creating pipeline:
Algorithm: AccuracyAwareQuantization
Parameters:
preset : performance
stat_subset_size : 300
maximal_drop : 0.01
target_device : ANY
model_type : None
dump_intermediate_model : False
exec_log_dir : backup/yolov4-tiny-3l-gray-license_plate_prune_0.46_keep_0.01_AccuracyAwareQuantization/2021-04-08_16-39-59
===========================================================================
IE version: 2.1.2021.3.0-2787-60059f2c755-releases/2021/3
Loaded CPU plugin version:
CPU - MKLDNNPlugin: 2.1.2021.3.0-2787-60059f2c755-releases/2021/3
Annotation conversion for VOC2012 dataset has been started
Parameters to be used for conversion:
converter: voc_detection
annotations_dir: VOCdevkit/VOC2012/Annotations
images_dir: VOCdevkit/VOC2012/JPEGImages
imageset_file: VOCdevkit/VOC2012/ImageSets/Main/val.txt
dataset_meta_file: VOCdevkit/VOC2012/label_map.json
Annotation conversion for VOC2012 dataset has been finished
INFO:compression.statistics.collector:Start computing statistics for algorithms : AccuracyAwareQuantization
16:40:00 accuracy_checker WARNING: /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/ops/fakequantize.py:84: RuntimeWarning: invalid value encountered in less_equal
underflow_mask = x <= input_low
16:40:00 accuracy_checker WARNING: /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/ops/fakequantize.py:85: RuntimeWarning: invalid value encountered in greater
overflow_mask = x > input_high
INFO:compression.statistics.collector:Computing statistics finished
INFO:compression.pipeline.pipeline:Start algorithm: AccuracyAwareQuantization
INFO:compression.algorithms.quantization.accuracy_aware.algorithm:Start original model inference
16:40:03 accuracy_checker WARNING: /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/back/ie_ir_ver_2/emitter.py:243: DeprecationWarning: This method will be removed in future versions. Use 'list(elem)' or iteration over elem instead.
if len(element.attrib) == 0 and len(element.getchildren()) == 0:
INFO:compression.engines.ac_engine:Start inference of 26 images
Total dataset size: 26
26 objects processed in 5.648 seconds
INFO:compression.engines.ac_engine:Inference finished
INFO:compression.algorithms.quantization.accuracy_aware.algorithm:Baseline metrics: {'detection_accuracy': 0.0}
INFO:compression.algorithms.quantization.accuracy_aware.algorithm:Start quantization
INFO:compression.algorithms.quantization.default.algorithm:Start computing statistics for algorithm : ActivationChannelAlignment
INFO:compression.algorithms.quantization.default.algorithm:Computing statistics finished
INFO:compression.algorithms.quantization.default.algorithm:Start computing statistics for algorithms : MinMaxQuantization,FastBiasCorrection
INFO:compression.algorithms.quantization.default.algorithm:Computing statistics finished
INFO:compression.algorithms.quantization.accuracy_aware.algorithm:Start compressed model inference
INFO:compression.engines.ac_engine:Start inference of 26 images
Total dataset size: 26
26 objects processed in 5.687 seconds
INFO:compression.engines.ac_engine:Inference finished
INFO:compression.algorithms.quantization.accuracy_aware.algorithm:Fully quantized metrics: {'detection_accuracy': 0.0}
INFO:compression.algorithms.quantization.accuracy_aware.algorithm:Accuracy drop: {'detection_accuracy': 0.0}
INFO:compression.pipeline.pipeline:Finished: AccuracyAwareQuantization
===========================================================================
Traceback (most recent call last):
File "/usr/local/bin/pot", line 33, in <module>
sys.exit(load_entry_point('pot==1.0', 'console_scripts', 'pot')())
File "/opt/intel/openvino_2021/deployment_tools/tools/post_training_optimization_toolkit/app/run.py", line 37, in main
app(sys.argv[1:])
File "/opt/intel/openvino_2021/deployment_tools/tools/post_training_optimization_toolkit/app/run.py", line 56, in app
metrics = optimize(config)
File "/opt/intel/openvino_2021/deployment_tools/tools/post_training_optimization_toolkit/app/run.py", line 126, in optimize
compress_model_weights(compressed_model)
File "/opt/intel/openvino_2021/deployment_tools/tools/post_training_optimization_toolkit/compression/graph/model_utils.py", line 64, in compress_model_weights
compress_weights(model_dict['model'])
File "/opt/intel/openvino_2021/deployment_tools/tools/post_training_optimization_toolkit/compression/graph/passes.py", line 771, in compress_weights
model.clean_up()
File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/graph/graph.py", line 1004, in clean_up
shape_inference(self)
File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/middle/passes/eliminate.py", line 168, in shape_inference
node.infer(node)
File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/ops/Cast.py", line 58, in infer
new_blob, finite_match_count, zero_match_count = convert_blob(node.in_node(0).value, dst_type)
File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/middle/passes/convert_data_type.py", line 97, in convert_blob
raise Error('The conversion of blob with value "{}" to dst_type "{}" results in rounding'.format(
mo.utils.error.Error: The conversion of blob with value "[[[[nan nan nan]
[nan nan nan]
[nan nan nan]]
... (array dump truncated; every element of the blob is nan) ...
[nan nan nan]]]]" to dst_type "<class 'numpy.int8'>" results in rounding
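From the traceback, the weight blob being cast to int8 during weight compression is entirely NaN. As a rough illustration only (this is not the actual Model Optimizer code), the failing step behaves conceptually like a guard that refuses to cast non-finite values:

```python
import numpy as np

def cast_blob_checked(blob: np.ndarray, dst_type=np.int8) -> np.ndarray:
    """Cast a weight blob to dst_type, refusing non-finite values.

    Only approximates the check that Model Optimizer's convert_blob()
    performs before emitting INT8 weights; not the real implementation.
    """
    if not np.isfinite(blob).all():
        raise ValueError(
            f"Blob contains NaN/Inf values; cannot convert to {dst_type} safely."
        )
    return blob.astype(dst_type)

# A blob of NaNs, like the one in the traceback above, trips the guard:
bad_blob = np.full((3, 3, 3), np.nan, dtype=np.float32)
try:
    cast_blob_checked(bad_blob)
except ValueError as err:
    print(err)
```

The NaN weights (and the baseline detection_accuracy of 0.0) suggest the IR itself was already broken before quantization.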
- Tags:
- Intel OpenVINO
- Intel® Core™ Processors
- Model Optimizer
- Intel® Pentium® Processors
- OpenVINO
- Post-Training Optimization Tool
Hi,
Just to clarify, did you manage to run this model before quantization?
If so, may I know which OpenVINO demo you used?
Also, could you share a link to the original YOLOv4-tiny-3l model you are using, so that I can download a fresh copy of it?
Sincerely,
Iffa
Thank you for your reply. I solved this problem myself: several parameters were missing from the conversion Python script.
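As a rough illustration of the kind of parameters involved (I am not listing my exact values here, and these paths, shapes, and flags are placeholders), a Model Optimizer invocation for a custom YOLO model typically needs options like the following:

```python
import subprocess

# Illustration only: common Model Optimizer options that custom YOLO
# conversions often need. All paths and values below are placeholders.
mo_args = [
    "python3",
    "/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py",
    "--input_model", "yolov4-tiny-3l-pruned.pb",  # placeholder frozen graph
    "--input_shape", "[1,416,416,3]",             # must match the training input size
    "--scale", "255",                             # 0-1 input normalization
    "--reverse_input_channels",                   # RGB/BGR mismatch is a common cause of 0.0 accuracy
    "--data_type", "FP32",
    "--output_dir", "model_ir",
]
subprocess.run(mo_args, check=True)
```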
Great to hear that!
It would be helpful to others who face the same problem if you could share your findings.
Sincerely,
Iffa
Greetings,
Intel will no longer monitor this thread since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
Sincerely,
Iffa