Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

INT8 Quantized model fails to load with nullptr error

NewMember
New Contributor I

Snippet of logs from the Post-Training Optimization Toolkit:

IE version: 2.1.2020.4.0-359-21e092122f4-releases/2020/4
Loaded CPU plugin version:
    CPU - MKLDNNPlugin: 2.1.2020.4.0-359-21e092122f4-releases/2020/4
INFO:compression.statistics.collector:Start computing statistics for algorithms : AccuracyAwareQuantization
INFO:compression.statistics.collector:Computing statistics finished
INFO:compression.pipeline.pipeline:Start algorithm: AccuracyAwareQuantization
INFO:compression.algorithms.quantization.accuracy_aware.algorithm:Start original model inference
INFO:compression.engines.ac_engine:Start inference of 2962 images
Total dataset size: 2962
15:09:05 accuracy_checker WARNING: /opt/intel/openvino_2020.4.287/deployment_tools/open_model_zoo/tools/accuracy_checker/accuracy_checker/logging.py:111: UserWarning: data batch 18 is not equal model input batch_size 32.
  warnings.warn(msg)

2962 objects processed in 38.678 seconds
INFO:compression.engines.ac_engine:Inference finished
INFO:compression.algorithms.quantization.accuracy_aware.algorithm:Baseline metrics: {'cmc': 133.43610723641677}
INFO:compression.algorithms.quantization.accuracy_aware.algorithm:Start quantization
INFO:compression.statistics.collector:Start computing statistics for algorithms : ActivationChannelAlignment
INFO:compression.statistics.collector:Computing statistics finished
INFO:compression.statistics.collector:Start computing statistics for algorithms : MinMaxQuantization,FastBiasCorrection
INFO:compression.statistics.collector:Computing statistics finished
INFO:compression.algorithms.quantization.accuracy_aware.algorithm:Start compressed model inference
Traceback (most recent call last):
  File "/home/vulcanadmin/anaconda3/envs/nm/bin/pot", line 11, in <module>
    load_entry_point('pot==1.0', 'console_scripts', 'pot')()
  File "/home/vulcanadmin/anaconda3/envs/nm/lib/python3.6/site-packages/pot-1.0-py3.6.egg/app/run.py", line 37, in main
    app(sys.argv[1:])
  File "/home/vulcanadmin/anaconda3/envs/nm/lib/python3.6/site-packages/pot-1.0-py3.6.egg/app/run.py", line 56, in app
    metrics = optimize(config)
  File "/home/vulcanadmin/anaconda3/envs/nm/lib/python3.6/site-packages/pot-1.0-py3.6.egg/app/run.py", line 123, in optimize
    compressed_model = pipeline.run(model)
  File "/home/vulcanadmin/anaconda3/envs/nm/lib/python3.6/site-packages/pot-1.0-py3.6.egg/compression/pipeline/pipeline.py", line 54, in run
    model = self.collect_statistics_and_run(model, current_algo_seq)
  File "/home/vulcanadmin/anaconda3/envs/nm/lib/python3.6/site-packages/pot-1.0-py3.6.egg/compression/pipeline/pipeline.py", line 67, in collect_statistics_and_run
    model = algo.run(model)
  File "/home/vulcanadmin/anaconda3/envs/nm/lib/python3.6/site-packages/pot-1.0-py3.6.egg/compression/algorithms/quantization/accuracy_aware/algorithm.py", line 158, in run
    print_progress=print_progress)
  File "/home/vulcanadmin/anaconda3/envs/nm/lib/python3.6/site-packages/pot-1.0-py3.6.egg/compression/algorithms/quantization/accuracy_aware/algorithm.py", line 509, in _quantize_and_evaluate
    print_progress=print_progress)
  File "/home/vulcanadmin/anaconda3/envs/nm/lib/python3.6/site-packages/pot-1.0-py3.6.egg/compression/algorithms/quantization/accuracy_aware/algorithm.py", line 203, in _evaluate_model
    self._engine.set_model(model, for_stat_collection=True)
  File "/home/vulcanadmin/anaconda3/envs/nm/lib/python3.6/site-packages/pot-1.0-py3.6.egg/compression/engines/ac_engine.py", line 87, in set_model
    stdout_redirect(_set_model, self._tmp_dir.name)
  File "/home/vulcanadmin/anaconda3/envs/nm/lib/python3.6/site-packages/pot-1.0-py3.6.egg/compression/utils/logger.py", line 114, in stdout_redirect
    res = fn(*args, **kwargs)
  File "/home/vulcanadmin/anaconda3/envs/nm/lib/python3.6/site-packages/pot-1.0-py3.6.egg/compression/engines/ac_engine.py", line 85, in _set_model
    self._set_model_from_files(paths)
  File "/home/vulcanadmin/anaconda3/envs/nm/lib/python3.6/site-packages/pot-1.0-py3.6.egg/compression/engines/ac_engine.py", line 59, in _set_model_from_files
    self._model = self._load_model(paths)
  File "/home/vulcanadmin/anaconda3/envs/nm/lib/python3.6/site-packages/pot-1.0-py3.6.egg/compression/engines/ac_engine.py", line 190, in _load_model
    self._model_evaluator.load_network_from_ir(paths)
  File "/opt/intel/openvino_2020.4.287/deployment_tools/open_model_zoo/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py", line 407, in load_network_from_ir
    self.launcher.load_ir(xml_path, bin_path)
  File "/opt/intel/openvino_2020.4.287/deployment_tools/open_model_zoo/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py", line 854, in load_ir
    self.load_network(log=log)
  File "/opt/intel/openvino_2020.4.287/deployment_tools/open_model_zoo/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py", line 849, in load_network
    self.exec_network = self.ie_core.load_network(self.network, self._device, num_requests=self.num_requests)
  File "ie_api.pyx", line 314, in openvino.inference_engine.ie_api.IECore.load_network
  File "ie_api.pyx", line 323, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Data 1 inserted into layer 573 is nullptr
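One side note on the accuracy-checker warning earlier in the log ("data batch 18 is not equal model input batch_size 32"): it most likely refers to the final partial batch, since the dataset size does not divide evenly by the batch size. A quick plain-Python check (no OpenVINO API involved):

```python
# The UserWarning about "data batch 18" is explained by simple arithmetic:
# 2962 images do not divide evenly into batches of 32, so the last batch
# holds the remainder of 18 samples.
dataset_size = 2962   # "Total dataset size: 2962" from the log
batch_size = 32       # model input batch size from the warning
full_batches, last_batch = divmod(dataset_size, batch_size)
print(full_batches, last_batch)  # 92 full batches, then one final batch of 18
```

So the warning is benign; the nullptr error at load_network is the actual failure.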
Iffa_Intel
Moderator

Greetings,


First and foremost, please note that INT8 calibration is not available in the following cases:

  • your project uses a generated dataset
  • your project uses a model with Intermediate Representation (IR) versions lower than 10
  • your model is already calibrated
  • you run the project on Intel® Processor Graphics, Intel® Movidius™ Neural Compute Stick 2, or Intel® Vision Accelerator Design with Intel® Movidius™ VPUs


This is the official OpenVINO quantization tutorial: https://docs.openvinotoolkit.org/latest/workbench_docs_Workbench_DG_Int_8_Quantization.html
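To check the IR-version condition above, the version can be read directly from the model's .xml file: the IR's root `<net>` element carries a `version` attribute (e.g. `version="10"`). A minimal sketch, assuming a standard IR layout; the helper name `ir_version` is illustrative, not part of any OpenVINO API:

```python
import xml.etree.ElementTree as ET

def ir_version(xml_path):
    # An OpenVINO IR .xml stores its version as an attribute of the root
    # <net> element, e.g. <net name="model" version="10">. POT INT8
    # calibration requires IR version 10 or newer.
    root = ET.parse(xml_path).getroot()
    return int(root.get("version", 0))
```

If this reports a version lower than 10, re-exporting the model with a recent Model Optimizer should be the first step.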


If your model is not covered by any of the conditions above, could you describe your model in more detail?



Sincerely,

Iffa


Iffa_Intel
Moderator

Greetings,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Sincerely,

Iffa

