Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision related to Intel® platforms.

error while converting to INT8 using dl_workbench

TarunM
Beginner

I am able to convert my ONNX model to IR FP16, but I am getting an error when converting it to INT8. I am using Deep Learning Workbench.

The error:

[setupvars.sh] OpenVINO environment initialized
[RUN COMMAND] + pot --direct-dump --progress-bar --output-dir /opt/intel/openvino_2022/tools/workbench/wb/data/int8_calibration_artifacts/47/job_artifacts --config /opt/intel/openvino_2022/tools/workbench/wb/data/int8_calibration_artifacts/47/scripts/int8_calibration.config.json
2022-07-05 07:43:59.038679: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/opt/intel/openvino_2022.1.0.643/extras/opencv/python/cv2/../../bin:/opt/intel/openvino/extras/opencv/lib:/opt/intel/openvino/tools/compile_tool:/opt/intel/openvino/runtime/3rdparty/tbb/lib::/opt/intel/openvino/runtime/3rdparty/hddl/lib:/opt/intel/openvino/runtime/lib/intel64::/opt/intel/openvino_2022.1.0.643/extras/opencv/python/cv2/../../bin:/opt/intel/openvino/extras/opencv/lib:/opt/intel/openvino/runtime/lib/intel64:/opt/intel/openvino/tools/compile_tool:/opt/intel/openvino/runtime/3rdparty/tbb/lib:/opt/intel/openvino/runtime/3rdparty/hddl/lib
2022-07-05 07:43:59.038700: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.

0%| |00:00Traceback (most recent call last):
File "/usr/local/bin/pot", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/pot/app/run.py", line 26, in main
app(sys.argv[1:])
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/pot/app/run.py", line 56, in app
metrics = optimize(config)
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/pot/app/run.py", line 114, in optimize
engine = create_engine(config.engine, data_loader=data_loader, metric=None)
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/pot/engines/creator.py", line 16, in create_engine
return ACEngine(config)
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/pot/engines/ac_engine.py", line 36, in __init__
self._model_evaluator = create_model_evaluator(config)
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/accuracy_checker/evaluators/quantization_model_evaluator.py", line 38, in create_model_evaluator
return ModelEvaluator.from_configs(config)
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/accuracy_checker/evaluators/quantization_model_evaluator.py", line 75, in from_configs
adapter = None if not config_adapter else create_adapter(
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/accuracy_checker/adapters/adapter.py", line 173, in create_adapter
adapter = Adapter.provide(adapter_type, adapter_config, label_map=label_map,
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/accuracy_checker/dependency.py", line 75, in provide
return root_provider(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/accuracy_checker/adapters/adapter.py", line 35, in __init__
self.configure()
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/accuracy_checker/adapters/yolo.py", line 419, in configure
raise ConfigError('anchor mask should be specified for all output layers')
openvino.tools.accuracy_checker.config.config_validator.ConfigError: anchor mask should be specified for all output layers
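For context, this ConfigError is raised by the Accuracy Checker YOLO adapter when the calibration config lists output layers without a matching anchor mask for each one. A minimal sketch of the relevant adapter section is below, assuming a YOLOv3-style model with three detection outputs; the layer names shown are placeholders, not taken from the log above:

```yaml
# Hedged sketch of an Accuracy Checker "adapter" section for a
# YOLOv3-style model. Output layer names are placeholders and must
# match the actual output layers of your converted IR model.
adapter:
  type: yolo_v3
  classes: 80
  coords: 4
  num: 9
  anchors: [10, 13, 16, 30, 33, 23, 30, 61, 62, 45,
            59, 119, 116, 90, 156, 198, 373, 326]
  outputs:
    - output_layer_1   # placeholder
    - output_layer_2   # placeholder
    - output_layer_3   # placeholder
  # One mask per entry in "outputs" — the mismatch between these two
  # lists is what triggers the "anchor mask should be specified for
  # all output layers" error.
  anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
```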

2 Replies
Wan_Intel
Moderator

Hi TarunM,

Thanks for reaching out to us.

Could you please share your model and dataset with us for further investigation?

Meanwhile, could you please try to configure the INT8 calibration settings and see if you are able to convert your model to INT8?


Regards,

Wan


Wan_Intel
Moderator

Hi TarunM,

Thanks for your question.

This thread will no longer be monitored since we have provided suggestions.

If you need any additional information from Intel, please submit a new question.


Best regards,

Wan

