Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Custom dataset image classification INT8 quantization

msolmaz
Employee

I am trying to do INT8 optimization using DL Workbench. My model was trained in TensorFlow, with feature extraction based on DenseNet. I was able to convert it to IR in FP32 precision and run it with the Inference Engine correctly.

But when I did the default optimization (with an unannotated dataset), the inference results were horrible. The classifier predicted a single class with the same softmax probability for every image: 0.3559512794017792.

Then I used the link (https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Dataset_Types.html#imagenet) to create a custom ImageNet-style dataset with images and a val.txt annotation file. I was able to upload it to DL Workbench without an issue. However, when I tried to run Accuracy Aware Optimization, I received the error below. I believe this is because my custom dataset images are grayscale and my model expects (1, 1, 300, 300)-shaped input.
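
For reference, a minimal sketch of how such an annotation file can be generated; the folder, file names, and class ids here are hypothetical, and the `<image_name> <label_id>` line format follows the ImageNet convention described on the linked page:

```python
import os

# Hypothetical grayscale images, e.g. dataset/img_0001.png, and a mapping
# from file name to integer class id produced during labeling.
labels = {"img_0001.png": 0, "img_0002.png": 2}  # illustrative values

# Each line of val.txt pairs an image file name with its class id,
# following the ImageNet annotation convention used by DL Workbench.
with open(os.path.join("dataset", "val.txt"), "w") as f:
    for name, class_id in sorted(labels.items()):
        f.write(f"{name} {class_id}\n")
```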

So I have 2 questions:

  1. Why would the default INT8 optimization fail so badly? I used code similar to this page, and FP32 inferencing is spot on (see the sketch after this list for the kind of pipeline I am using).
  2. Does accuracy aware optimization only work with RGB images?
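
A minimal sketch of such an FP32 classification pipeline, using the Inference Engine Python API from the OpenVINO 2021.x layout visible in the log below; the model paths, file names, and the 300×300 grayscale preprocessing are assumptions based on this post:

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore

# Paths are hypothetical; the IR was produced by Model Optimizer in FP32.
ie = IECore()
net = ie.read_network(model="densenet_fp32.xml", weights="densenet_fp32.bin")
input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))
exec_net = ie.load_network(network=net, device_name="CPU")

# The model reportedly expects a (1, 1, 300, 300) grayscale tensor.
img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (300, 300)).astype(np.float32)
blob = img[np.newaxis, np.newaxis, :, :]  # NCHW layout

probs = exec_net.infer({input_blob: blob})[output_blob]
print("top class:", int(np.argmax(probs)))
```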

 

DL Workbench Error Log:

[setupvars.sh] OpenVINO environment initialized
[RUN COMMAND] + pot --direct-dump --progress-bar --output-dir /home/workbench/.workbench/models/73/359/job_artifacts --config /home/workbench/.workbench/models/73/359/scripts/int8_calibration.config.json
Traceback (most recent call last):
  File "/usr/local/bin/pot", line 11, in <module>
    load_entry_point('pot==1.0', 'console_scripts', 'pot')()
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/app/run.py", line 36, in main
    app(sys.argv[1:])
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/app/run.py", line 60, in app
    metrics = optimize(config)
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/app/run.py", line 138, in optimize
    compressed_model = pipeline.run(model)
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/compression/pipeline/pipeline.py", line 51, in run
    model = self.collect_statistics_and_run(model, current_algo_seq)
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/compression/pipeline/pipeline.py", line 64, in collect_statistics_and_run
    model = algo.run(model)
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/compression/algorithms/quantization/accuracy_aware/algorithm.py", line 152, in run
    self._evaluate_model(model=model,
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/compression/algorithms/quantization/accuracy_aware/algorithm.py", line 509, in _evaluate_model
    metrics, metrics_per_sample = evaluate_model(model, self._engine, self._dataset_size,
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/compression/algorithms/quantization/accuracy_aware/utils.py", line 289, in evaluate_model
    (metrics_per_sample, metrics), raw_output = engine.predict(stats_layout=stats_layout,
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/compression/engines/ac_engine.py", line 166, in predict
    stdout_redirect(self._model_evaluator.process_dataset_async, **args)
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/compression/utils/logger.py", line 129, in stdout_redirect
    res = fn(*args, **kwargs)
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/libs/open_model_zoo/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py", line 142, in process_dataset_async
    self._fill_free_irs(free_irs, queued_irs, infer_requests_pool, dataset_iterator, **kwargs)
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/libs/open_model_zoo/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py", line 329, in _fill_free_irs
    batch_input, batch_meta = self._get_batch_input(batch_inputs, batch_annotation)
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/libs/open_model_zoo/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py", line 80, in _get_batch_input
    filled_inputs = self.input_feeder.fill_inputs(batch_input)
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/libs/open_model_zoo/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py", line 193, in fill_inputs
    inputs = self.fill_non_constant_inputs(data_representation_batch)
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/libs/open_model_zoo/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py", line 186, in fill_non_constant_inputs
    return self._transform_batch(
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/libs/open_model_zoo/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py", line 321, in _transform_batch
    batch_data[layer_name] = self.input_transform_func(
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/libs/open_model_zoo/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py", line 840, in fit_to_input
    return self._align_data_shape(data, layer_name, layout)
  File "/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/libs/open_model_zoo/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py", line 606, in _align_data_shape
    return data.reshape(input_shape) if not self.disable_resize_to_input else data
ValueError: cannot reshape array of size 270000 into shape (1,1,300,300)

VladimirG
Employee

Hi msolmaz,
Thank you for reaching out.

 

Judging by the error logs, you're trying to feed RGB images directly to a grayscale model input, which causes the error during reshaping. This issue can be resolved in two ways:

1) Manually convert your images to grayscale and upload them as a separate dataset (see the sketch after this list);
2) Adjust your accuracy configuration by adding preprocessing options in the dataset section of the config. You can refer to the Accuracy Checker documentation for the full list of available options, but in your particular case you need to include `rgb_to_gray` to resolve the issue.
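
The arithmetic in the error message supports this reading: 270000 = 300 × 300 × 3, i.e. the array being reshaped still has three channels when it reaches the single-channel (1, 1, 300, 300) input. For option 1, a minimal conversion sketch, assuming OpenCV is available; the folder names are hypothetical:

```python
import os
import cv2

SRC, DST = "dataset_rgb", "dataset_gray"  # hypothetical folder names
os.makedirs(DST, exist_ok=True)

for name in os.listdir(SRC):
    img = cv2.imread(os.path.join(SRC, name))     # loaded as 3-channel BGR
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # collapse to a single channel
    cv2.imwrite(os.path.join(DST, name), gray)    # 300x300x1 fits (1,1,300,300)
```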

 

As for the poor performance of the default quantization method, this is largely to be expected, though it varies significantly from model to model. The aim of the algorithm is to quantize as many layers as possible with no regard for the inference results, which is why the AccuracyAware option is provided alongside it.

 

Regards,
Vladimir

msolmaz
Employee

Thank you, Vladimir.

My images were already grayscale, but I hadn't created a YML file for the RGB-to-grayscale conversion. Now I have done that and also created a dataset_meta.json for the class labels, and I definitely got better results (a sketch of that file follows).
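
For other readers, a sketch of what such a dataset_meta.json might contain; the class names are placeholders, and the exact schema should be checked against the DL Workbench dataset documentation:

```python
import json

# Hypothetical label map: class id (as a string) -> human-readable name.
meta = {"label_map": {"0": "class_a", "1": "class_b", "2": "class_c"}}

with open("dataset_meta.json", "w") as f:
    json.dump(meta, f, indent=4)
```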

I have follow-up questions:

  • I am applying data normalization as in the attached picture: mean 255, std -255. It doesn't raise any errors during optimization, but do you think this could cause a problem? (A worked example of what this mapping does is sketched after this list.)
  • How can I improve the quantization results of Accuracy Aware optimization? Could more images help?
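
As a worked check of what mean 255, std -255 does under the usual (pixel - mean) / std convention: (x - 255) / (-255) = (255 - x) / 255, which maps pixel 0 to 1.0 and pixel 255 to 0.0, i.e. it rescales to [0, 1] while inverting intensities. Whether that matches the training-time preprocessing is for the model owner to confirm. A quick sketch:

```python
import numpy as np

mean, std = 255.0, -255.0
pixels = np.array([0.0, 127.5, 255.0])
normalized = (pixels - mean) / std
print(normalized)  # [1.0, 0.5, 0.0] -- rescaled to [0, 1] and inverted
```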

 

thanks,

Mehmet

 

VladimirG
Employee
  • Assuming that this set of normalization parameters matches what your model was trained with, it shouldn't have any effect on the results of quantization.
  • Both increasing the subset size used for quantization and switching to the Mixed preset can improve the accuracy of quantized models; note, however, that doing so may noticeably increase the time required to quantize your model (a config sketch follows this list). Aside from that, the Post-training Optimization Toolkit offers a Tunable Quantization algorithm, but it is not currently enabled in DL Workbench. Additionally, if your model was originally trained in TensorFlow or PyTorch, you may want to look into Quantization-Aware Training with NNCF, which is provided as a standalone package.
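
To illustrate the second point, a hedged sketch of the relevant knobs in a POT configuration like the int8_calibration.config.json seen in the log; the parameter values are illustrative, and the exact schema should be checked against the POT documentation for your release:

```python
import json

# Illustrative fragment of a POT config: a larger stat_subset_size and the
# "mixed" preset trade quantization time for potentially better accuracy.
config = {
    "compression": {
        "algorithms": [
            {
                "name": "AccuracyAwareQuantization",
                "params": {
                    "preset": "mixed",        # symmetric weights, asymmetric activations
                    "stat_subset_size": 300,  # number of calibration samples
                    "maximal_drop": 0.01,     # allowed accuracy degradation
                },
            }
        ]
    }
}
print(json.dumps(config, indent=4))
```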

Regards,
Vladimir

Zulkifli_Intel
Moderator

Hello msolmaz,


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.


Sincerely,

Zulkifli

