Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Converting custom-trained SSD_Inception_v2 to intermediate representation for inference on Intel Neural Compute Stick (first generation)



I have been trying to train my own object detection (single class) model and then create the .bin/.xml IR files so that I can perform inference on a Raspberry Pi equipped with OpenVINO.

I trained an SSD_Inception_v2 model successfully using the TensorFlow Object Detection API (TensorFlow 1.14.0 with the TensorFlow Models v1.13.0 release) and then exported the trained model to a frozen inference graph (.pb) file, which I could successfully use to run inference and detect my objects of interest with an F1-score of 93%.
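For reference (not part of the original post), the F1-score quoted above is the harmonic mean of precision and recall; a quick sketch of the arithmetic, using hypothetical precision/recall values:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Hypothetical values: precision 0.95 and recall 0.91 give an F1 of about 0.93
print(round(f1_score(0.95, 0.91), 2))
```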

Link to download the pb and config files after training and exporting:

Then, I used openvino_2020.2.117 on my Windows 10 PC to convert the .pb model to intermediate representation. The conversion finished with no errors and produced the .xml and .bin (as well as a .mapping) files. Here is the command line for the conversion (<path> is the absolute path to the directory where the frozen model is located on my computer, and <installation path> is the directory where OpenVINO is installed). I am not sure whether I needed to make any changes to the original JSON file:

python "<installation path>/openvino_2020.2.117/deployment_tools/model_optimizer/mo_tf.py" --input_model=<path>/frozen_inference_graph.pb 
--transformations_config "<installation path>/openvino_2020.2.117/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json" 
--tensorflow_object_detection_api_pipeline_config <path>/pipeline.config --reverse_input_channels

(I also tried the command with and without the argument --input_shape [1,300,300,3].)
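One way to sanity-check a conversion like this (a sketch, not from the original thread) is to parse the generated .xml and confirm the input layer reports a non-zero shape. The IR snippet below is a minimal illustrative example; in practice you would call `ET.parse('frozen_inference_graph.xml')` on the real file, whose name is assumed here to match the Model Optimizer output:

```python
import xml.etree.ElementTree as ET

# Minimal illustrative IR snippet (a real frozen_inference_graph.xml has many
# layers); in practice, use ET.parse('frozen_inference_graph.xml') instead.
ir = """<net name="model" version="7">
  <layers>
    <layer id="0" name="image_tensor" type="Input">
      <output>
        <port id="0"><dim>1</dim><dim>3</dim><dim>300</dim><dim>300</dim></port>
      </output>
    </layer>
  </layers>
</net>"""

root = ET.fromstring(ir)
inputs = {}
for layer in root.iter('layer'):
    # IR v7 marks inputs with type "Input"; IR v10 uses "Parameter"
    if layer.get('type') in ('Input', 'Parameter'):
        inputs[layer.get('name')] = [int(d.text) for d in layer.iter('dim')]
print(inputs)  # {'image_tensor': [1, 3, 300, 300]}
```

If any input dimension prints as 0, the error seen at runtime is expected, and the conversion flags are the first thing to revisit.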

I then copied the .xml and .bin files to my Raspberry Pi 4B, which has OpenVINO toolkit 2020.2.120 (the closest version I could find to my Windows toolkit). I am sure that OpenVINO is installed properly and my Intel Neural Compute Stick (first generation) is detected correctly, because I can run the sample SSD face-detection demo and make inference with no problem.

Link to download the created xml,bin,mapping files after conversion:

Now, when I try the following code (from inside the folder where my xml and bin files are copied to) to perform inference using my own trained model on a test image "im1.jpg" (you can download the test image from here: )

import cv2 as cv

# Load the IR files produced by the Model Optimizer (filenames assumed to
# match its output) and target the Neural Compute Stick through the
# Inference Engine backend
net = cv.dnn_DetectionModel('frozen_inference_graph.xml', 'frozen_inference_graph.bin')
net.setPreferableBackend(cv.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)

frame = cv.imread('/home/pi/Downloads/im1.jpg')
if frame is None:
    raise Exception('Image not found!')
_, confidences, boxes = net.detect(frame, confThreshold=0.5)
for confidence, box in zip(list(confidences), boxes):
    cv.rectangle(frame, box, color=(0, 255, 0))
cv.imwrite('/home/pi/Downloads/im1_out.png', frame)

I get the error:

"terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException' what():
PriorBoxClustered_0/naked_not_unsqueezed has zero dimension that is not allowable"

My guess is that the IR conversion was not done correctly. Can you please help me identify what I did wrong during the conversion to cause this error?

Many regards,


3 Replies

Hi Sch,

Thanks for reaching out. There is a known issue when using the newer IR format with Raspbian OS. Please try converting the frozen model to IR format again, adding the following flag:


We managed to run your code successfully with the converted IR files. Please test it and let us know if the issue persists.

Best regards,




Hi David,

Your solution worked very well. So that I don't run into the same issue next time, which versions of TensorFlow and the TensorFlow Object Detection API would you suggest using?



Hello Sch,

It is great that it worked for you! Regarding your other question, there is no single "best" version; any version that suits your needs should work. Also, the latest versions of the OpenVINO™ toolkit have added support for TensorFlow 1.15.x.

Best regards,