Gurasz__Ky
Beginner

Trying to get an IR from a frozen TF model

I have an InceptionV2 model that I want to get working on the Raspberry Pi using the NCS2. The examples work fine. The model I was given is built upon the ssd_inception_v2 demo, which I know works: I've been able to convert that demo's frozen .pb to IR .bin and .xml files and run them successfully on the Pi. However, when I try to convert the given model to an IR, it fails. To be more specific, it fails in different ways depending on how I go about the conversion.

The given model comes with a frozen .pb file, checkpoint files, and a .pbtxt. To convert the .pb file, the command I'm using is:

python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py --input_model frozengraph.pb --tensorflow_use_custom_operations_config /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline "PATH"/pipeline.config --reverse_input_channels --data_type FP16

This gives an input-shape error, which I remedy with --input_shape [1,299,299,3], but that only leads to the error:

Cannot infer shapes or values for node "Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2"

 

So I tried both re-freezing the model and running the conversion on graph.pbtxt instead. Both methods throw errors, because the number of nodes found is 0 and 1, respectively.
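As a quick sanity check before handing a graph to the Model Optimizer, you can estimate how many nodes a text-format GraphDef actually contains by counting its top-level `node {` entries. This is a minimal stdlib-only sketch, not an official tool; the sample pbtxt fragment below is made up for illustration:

```python
import re

def count_nodes(pbtxt_text: str) -> int:
    """Count top-level node definitions in a text-format GraphDef.

    Matches lines that open a `node {` block at zero indentation,
    which is how TensorFlow's text-format GraphDef lays them out.
    """
    return len(re.findall(r"^node\s*\{", pbtxt_text, flags=re.MULTILINE))

# Tiny made-up GraphDef fragment for illustration.
sample = """\
node {
  name: "input"
  op: "Placeholder"
}
node {
  name: "conv1"
  op: "Conv2D"
}
"""

print(count_nodes(sample))  # prints 2 for this fragment
```

If this reports 0 for the file you are feeding to mo_tf.py, the file is not a GraphDef dump at all (for example, it may be a checkpoint meta file), which would explain the "0 nodes" failure.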

Any ideas what I could be doing wrong here? It's driving me nuts.

 

Shubha_R_Intel
Employee

Hello Ky. Look at your frozen *.pbtxt file. What value does "Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2" have? Is there a -1 anywhere?
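One way to check this without reading the file by hand is to scan the text-format graph for the node and report any -1 dimension values inside its block. A rough stdlib-only sketch (the block parsing is deliberately simplistic and assumes a well-formed pbtxt dump; the node name and sample fragment here are made up):

```python
import re

def find_negative_dims(pbtxt_text: str, node_name: str):
    """Return the -1 values found inside the named node's block.

    Very simplistic: grabs the text from the node's `name:` line up to
    the next top-level `node {` (or EOF) and looks for `size: -1` or
    `i: -1` attribute values.
    """
    start = pbtxt_text.find('name: "%s"' % node_name)
    if start == -1:
        return []
    end_match = re.search(r"^node\s*\{", pbtxt_text[start:], flags=re.MULTILINE)
    end = start + end_match.start() if end_match else len(pbtxt_text)
    block = pbtxt_text[start:end]
    return re.findall(r"(?:size|i):\s*(-1)\b", block)

# Made-up fragment: a node whose shape attribute carries a -1 dimension.
sample = """\
node {
  name: "Postprocessor/TopKV2"
  op: "TopKV2"
  attr {
    key: "shape"
    value { shape { dim { size: -1 } dim { size: 100 } } }
  }
}
node {
  name: "other"
  op: "Identity"
}
"""

print(find_negative_dims(sample, "Postprocessor/TopKV2"))  # ['-1']
```

A -1 dimension on the failing node would be consistent with the "Cannot infer shapes or values" error, since the Model Optimizer cannot resolve an undefined dimension at conversion time.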

Did you get your model from here?

http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2018_01_28.tar.gz

To me it looks like you followed the instructions here correctly:

http://docs.openvinotoolkit.org/R5/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Objec...

Your command looks similar to:

<INSTALL_DIR>/deployment_tools/model_optimizer/mo_tf.py --input_model=/tmp/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --tensorflow_use_custom_operations_config <INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /tmp/ssd_inception_v2_coco_2018_01_28/pipeline.config --reverse_input_channels

Please carefully read the Custom Input Shape section in that documentation. Also add --log_level DEBUG to see more details of your MO failure, and feel free to re-post your DEBUG log here.
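For reference, the original command with the debug flag added could look like the sketch below. The paths are placeholders copied from the thread, not verified locations, and the command is only assembled and echoed here rather than executed, since the Model Optimizer is not installed in this sketch's environment. Note that the full flag name ends in _config (--tensorflow_object_detection_api_pipeline_config), as in the documented example above:

```shell
#!/bin/sh
# Placeholder paths copied from the thread -- adjust to your install.
MO=/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer
CONFIG="$MO/extensions/front/tf/ssd_v2_support.json"

# Build the command as a string and echo it instead of running it.
CMD="python3 $MO/mo_tf.py \
 --input_model frozengraph.pb \
 --tensorflow_use_custom_operations_config $CONFIG \
 --tensorflow_object_detection_api_pipeline_config pipeline.config \
 --input_shape [1,299,299,3] \
 --reverse_input_channels \
 --data_type FP16 \
 --log_level DEBUG"
echo "$CMD"
```

Redirecting stderr to a file (e.g. appending `2> mo_debug.log` when actually running it) makes the long DEBUG output easier to attach to a reply.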
