Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Trying to get an IR from a frozen TF model


I have been given an Inception V2 model that I want to get working on the rPi, using the NCS2. The examples work fine. The model I was given is built upon the ssd_inception_v2 demo, which I know works, since I've been able to convert that demo's frozen .pb to IR .bin and .xml files and successfully run them on the Pi. However, when I try to convert the given model to an IR, it fails. To be more specific, it fails in different ways, depending on how I go about trying to convert it.

The given model has a frozen .pb file, checkpoint files and a .pbtxt. To convert the .pb file, the command I'm using is:

python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py --input_model frozengraph.pb --tensorflow_use_custom_operations_config /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config "PATH"/pipeline.config --reverse_input_channels --data_type FP16

This gives an input-shape error, which I remedy with --input_shape [1,299,299,3], but that only leads to the error:

Cannot infer shapes or values for node "Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2"


So I tried both re-freezing the model and running the conversion on graph.pbtxt instead. Both attempts throw errors saying the graph contains 0 and 1 nodes, respectively.
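For reference, re-freezing a TF Object Detection API checkpoint with TensorFlow's freeze_graph tool looks roughly like the sketch below. This is an assumption, not the asker's exact command: the checkpoint filename and the output node names are placeholders (the node names listed are the standard SSD detection outputs; substitute whatever your graph actually exposes).

```shell
# Hedged sketch of re-freezing a checkpoint into a new frozen .pb.
# model.ckpt and the --output_node_names list are placeholders;
# adjust them to match your own checkpoint and graph outputs.
python3 -m tensorflow.python.tools.freeze_graph \
    --input_graph graph.pbtxt \
    --input_checkpoint model.ckpt \
    --output_graph refrozen_graph.pb \
    --output_node_names detection_boxes,detection_scores,detection_classes,num_detections
```

If the resulting refrozen_graph.pb still reports 0 nodes, the checkpoint and the .pbtxt likely do not belong to the same graph.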

Any ideas what I could be doing wrong here? It's driving me nuts.



Hello Ky. Look at your frozen *.pbtxt file. What value does "Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2" have? Is there a -1 anywhere?
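One quick way to do that check (a sketch; substitute the actual name of your .pbtxt file) is to grep the text-format graph for the failing node and inspect the shape attributes printed after it for a -1 (i.e. an undefined dimension):

```shell
# Print the matching node plus the following 20 lines of context,
# which is usually enough to see its attr/shape entries.
# "graph.pbtxt" is a placeholder for your actual text graph file.
grep -n -A 20 "SortByField/TopKV2" graph.pbtxt
```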

Did you get your model from here?

To me it looks like you followed the instructions here correctly.

Your command looks similar to:

python3 <INSTALL_DIR>/deployment_tools/model_optimizer/mo_tf.py --input_model=/tmp/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --tensorflow_use_custom_operations_config <INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /tmp/ssd_inception_v2_coco_2018_01_28/pipeline.config --reverse_input_channels

Please carefully read the Custom Input Shape section in the documentation. Please add --log_level DEBUG to see more details of your MO failure. Feel free to re-post your DEBUG log here.
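Concretely, your conversion command with debug logging enabled would look like this (a sketch based on your posted command, with the mo_tf.py script named explicitly and the full --tensorflow_object_detection_api_pipeline_config flag spelled out; "PATH" is left as in your post):

```shell
# Same conversion, with --log_level DEBUG appended so MO prints the
# per-node shape-inference trace leading up to the TopKV2 failure.
python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
    --input_model frozengraph.pb \
    --tensorflow_use_custom_operations_config /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config "PATH"/pipeline.config \
    --reverse_input_channels \
    --data_type FP16 \
    --log_level DEBUG
```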
