Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

INT8 calibration_tool error

Hou_y_1
Beginner

I have a model trained offline, and I converted it to IR (XML and BIN, FP32). When I used the calibration_tool to generate an INT8 model, I met an exception:

My command: ./calibration_tool -t C -i ../../data/ -m ../../xx_tensor.xml

[ INFO ] InferenceEngine: 
	API version ............ 1.4
	Build .................. 19154
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin

	API version ............ 1.5
	Build .................. lnx_20181004
	Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 32
[ INFO ] Collecting accuracy metric in FP32 mode to get a baseline, collecting activation statistics
Progress: [....................] 100.00% done
  FP32 Accuracy: 0.00% 
[ INFO ] Verification of network accuracy if all possible layers converted to INT8
Validate int8 accuracy, threshold for activation statistics = 100.00
/opt/intel/computer_vision_sdk/inference_engine/samples/calibration_tool/main.cpp_440
/opt/intel/computer_vision_sdk/inference_engine/samples/calibration_tool/main.cpp_442
/opt/intel/computer_vision_sdk/inference_engine/samples/calibration_tool/calibrator_processors.cpp_217
/opt/intel/computer_vision_sdk/inference_engine/samples/calibration_tool/calibrator_processors.cpp_232
../../xx_tensor.bin
/opt/intel/computer_vision_sdk/inference_engine/samples/calibration_tool/calibrator_processors.cpp_243
/opt/intel/computer_vision_sdk/inference_engine/samples/calibration_tool/calibrator_processors.cpp_249
[ ERROR ] Inference problem: 
min and max sizes should be equal to channels count

The model (IR) can be executed normally with the MKLDNN plugin. How can I convert it to an INT8 model with the calibration tool?

Hou_y_1
Beginner

I have attached my model.

Shubha_R_Intel
Employee

Hi Hou. I was able to reproduce your problem; however, xx_tensor.labels is missing from your zip file. I don't think it has any relevance to the issue, though. Please remain patient while I investigate. Thanks for using OpenVINO!

Shubha

Hou_y_1
Beginner

Shubha R. (Intel) wrote:

Hi Hou. I was able to reproduce your problem; however, xx_tensor.labels is missing from your zip file. I don't think it has any relevance to the issue, though. Please remain patient while I investigate. Thanks for using OpenVINO!

Shubha

 

Thanks for your reply. The model is for binary classification. I have uploaded part of the data and the label file.

Shubha_R_Intel
Employee

Dear Hou:

If you look at your IR XML, your output dimensions are 2x32, which is not the output size for a traditional classification network. So instead of -t C, use -t RawC and it will work.

<layer id="77" name="ip2" precision="FP32" type="FullyConnected">
    <data out-size="2"/>
    <input>
        <port id="0">
            <dim>32</dim>
            <dim>256</dim>
        </port>
    </input>
    <output>
        <port id="2">
            <dim>32</dim>
            <dim>2</dim>
        </port>
    </output>
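
For reference, a minimal sketch of how these output dimensions can be inspected programmatically; it assumes the classic (API 1.x) CNNNetReader interface, and the file names are placeholders:

#include <inference_engine.hpp>
#include <iostream>

int main() {
    using namespace InferenceEngine;

    // Read the IR produced by the Model Optimizer (placeholder file names).
    CNNNetReader reader;
    reader.ReadNetwork("xx_tensor.xml");
    reader.ReadWeights("xx_tensor.bin");
    CNNNetwork network = reader.getNetwork();

    // Print the dimensions of every network output; for this IR the single
    // output should report the batch of 32 and the out-size of 2.
    for (const auto &output : network.getOutputsInfo()) {
        std::cout << output.first << ":";
        for (const auto &dim : output.second->getTensorDesc().getDims())
            std::cout << " " << dim;
        std::cout << std::endl;
    }
    return 0;
}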

Thanks for using OpenVINO!

Shubha

Hou_y_1
Beginner

Shubha R. (Intel) wrote:

Dear Hou:

If you look at your IR XML, your output dimensions are 2x32, which is not the output size for a traditional classification network. So instead of -t C, use -t RawC and it will work.


I executed the command "./calibration_tool -t RawC -i ../../data/ -m ../../xx_tensor.xml", which generated xx_tensor_i8.xml and .bin. But when I ran inference with the INT8 model, I met the same error:

LoadNetwork error
min and max sizes should be equal to channels count
..\src\inference_engine\cnn_network_int8_normalizer.cpp:106[NETWORK_NOT_LOADED]
../src/inference_engine/cpp_interfaces/impl/ie_plugin_internal.hpp:132_k:\openclworkspace\openvino_proj\openvino_proj\inferenceengine.cpp_214

32 is just the batch size; in my deployment model it is 1. I changed it to 32 in order to speed up the calibration process. I changed it back to 1 and executed "./calibration_tool -t C -i ../../data/ -m ../../xx_tensor.xml", but the same error still happened. I have attached my model.
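
For context, a minimal sketch of the load-and-infer flow where this failure occurs; the classic (API 1.x) interface, the CPU plugin, and the calibrated file names are assumptions. With a calibrated IR, the INT8 normalization pass runs inside LoadNetwork(), which is where the "min and max sizes should be equal to channels count" error above is raised, before any inference request is created:

#include <inference_engine.hpp>
#include <iostream>

int main() {
    using namespace InferenceEngine;
    try {
        // Read the calibrated IR (placeholder file names).
        CNNNetReader reader;
        reader.ReadNetwork("xx_tensor_i8.xml");
        reader.ReadWeights("xx_tensor_i8.bin");
        CNNNetwork network = reader.getNetwork();
        network.setBatchSize(1);

        // Loading onto the CPU (MKLDNN) plugin triggers the INT8 normalizer
        // (cnn_network_int8_normalizer.cpp); this is the call that fails here.
        InferencePlugin plugin = PluginDispatcher({""}).getPluginByDevice("CPU");
        ExecutableNetwork executableNetwork = plugin.LoadNetwork(network, {});

        // If loading succeeds, inference proceeds as usual (real code would
        // fill the input blobs with data before calling Infer()).
        InferRequest inferRequest = executableNetwork.CreateInferRequest();
        inferRequest.Infer();
    } catch (const std::exception &e) {
        std::cerr << e.what() << std::endl;
        return 1;
    }
    return 0;
}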

______________________________________________________________________________________________________________________________________

I have another question: how do I calibrate a metric learning model? I train the model as a classification model, but I don't need the results of the last softmax layer or other classification layers, and the number of classes is huge. Can I delete the last layer and use "-t RawC" to collect statistics?

Looking forward to your reply. Thanks.

Shubha_R_Intel
Employee

Dear Hou, sorry about that. What do you mean by "metric learning model"? Or is that just the kind of model you're dealing with here (a name, in other words)?

May I ask how you are running inference? Are you running one of the OpenVINO samples, or have you written your own code?

If you have written your own code, can you also include it in the zip file? Or you can send me a private message with the attachment (use the Send Author A Message link).

Thanks,

Shubha
