Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Error "Stopped shape/value propagation at 'Postprocessor/Cast' node" during IR conversion



  • anaconda python 3.6.8
  • tensorflow gpu 1.14
  • CUDA 10.0
  • openvino_2019.1.144

I'm trying to convert a model trained for object detection with TensorFlow (based on the pre-trained ssd_inception_v2 model) so it can run on an Intel Myriad X. I successfully trained it, starting from the model files downloaded as a tar.gz from here

I want it to recognize 5 new classes, so I added and labelled my images, created all the required files, and trained with the python scripts from /tensorflow/models/object_detection:

 python train.py --logtostderr --train_dir=./data_inception_v2 --pipeline_config_path=./data_inception_v2/ssd_inception_v2_coco.config

Then I exported the frozen graph with the checkpoint of my custom trained model:

python export_inference_graph.py --input_type image_tensor --pipeline_config_path ./data_inception_v2/ssd_inception_v2_coco.config --trained_checkpoint_prefix ./data_inception_v2/model.ckpt-10000 --output_directory ./inference_graph/inception_v2/

Now, to convert the model to IR, I've tried almost every option. The command I used and the error given by the system are:

python mo_tf.py --input_model /home/siemens/Desktop/object_detection_retraining/inference_graph/inception_v2/frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config /home/siemens/Desktop/object_detection_retraining/inference_graph/inception_v2/pipeline.config --tensorflow_use_custom_operations_config ./extensions/front/tf/ssd_v2_support.json --output="detection_boxes,detection_scores,num_detections" --output_dir /home/siemens/Desktop/object_detection_retraining/openvino/inception_v2 --reverse_input_channels --input_shape [1,300,300,3]
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /home/siemens/Desktop/object_detection_retraining/inference_graph/inception_v2/frozen_inference_graph.pb
    - Path for generated IR:     /home/siemens/Desktop/object_detection_retraining/openvino/inception_v2
    - IR output name:     frozen_inference_graph
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     detection_boxes,detection_scores,num_detections
    - Input shapes:     [1,300,300,3]
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     True
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     /home/siemens/Desktop/object_detection_retraining/inference_graph/inception_v2/pipeline.config
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/./extensions/front/tf/ssd_v2_support.json
Model Optimizer version:     2019.1.1-83-g28dfbfd
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
WARNING: Logging before flag parsing goes to stderr.
E0630 21:50:09.085493 139859473712896] Cannot infer shapes or values for node "Postprocessor/Cast".
E0630 21:50:09.085652 139859473712896] 0
E0630 21:50:09.085696 139859473712896]
E0630 21:50:09.085741 139859473712896] It can happen due to bug in custom shape infer function <function Cast.infer at 0x7f33388b49d8>.
E0630 21:50:09.085775 139859473712896] Or because the node inputs have incorrect values/shapes.
E0630 21:50:09.085806 139859473712896] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
E0630 21:50:09.086285 139859473712896] Run Model Optimizer with --log_level=DEBUG for more information.
E0630 21:50:09.086350 139859473712896] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "Postprocessor/Cast" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

I've also tried several times on different machines, using TensorFlow in a virtualenv or with Anaconda, but the problem remains. I've tried different option parameters, as you can see in the command above, but the error persists and is always 'Stopped shape/value propagation at "Postprocessor/Cast" node'.
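For what it's worth, node names are stored as plain strings inside the binary protobuf, so a quick `strings` + `grep` probe (no TensorFlow needed) can show whether the frozen graph contains `Postprocessor/Cast` (emitted by TensorFlow >= 1.13, where `tf.to_float` became a `Cast` op) or the older `Postprocessor/ToFloat` name. A minimal sketch, with the path taken from the export step above as an example:

```shell
# Probe which Postprocessor op name the frozen graph actually uses.
# Example path -- point it at your own frozen_inference_graph.pb.
PB=./inference_graph/inception_v2/frozen_inference_graph.pb
[ -f "$PB" ] && strings "$PB" | grep -oE 'Postprocessor/(Cast|ToFloat)' | sort -u
```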

Attached is the --log_level DEBUG output; maybe it can help in understanding the problem.

I've tried converting other models to IR, such as ones for image classification tasks, and the tool works correctly. I think everything is installed fine; I've reinstalled 5 or 6 times because of this problem...

Can you help me?

Do I have to change something in the config file? Is the node "Postprocessor/Cast" not supported? Do I have to change something in training?
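For reference, the subgraph-replacement entry in ssd_v2_support.json that this error relates to looks approximately like this (sketched from the 2019 R1 copy; check your installed file, as field names may differ between releases). Note the "Postprocessor/ToFloat" start point: graphs frozen with TensorFlow >= 1.13 name this node "Postprocessor/Cast" instead, so the name in the file has to match what is actually in the graph:

```json
{
    "id": "ObjectDetectionAPISSDPostprocessorReplacement",
    "instances": {
        "end_points": [
            "detection_boxes",
            "detection_scores",
            "num_detections"
        ],
        "start_points": [
            "Postprocessor/Shape",
            "Postprocessor/slice_to_beginning",
            "Postprocessor/ToFloat"
        ]
    },
    "match_kind": "points"
}
```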

Do you need some other information?

Please, I have spent many hours on this project and I don't know what else I can do.

Best regards


3 Replies

Please, does anyone have ideas about the question above?


Hi Davide,

Model Optimizer only supports up to TensorFlow 1.12 today.

Also, we recommend using the CPU build of TensorFlow.

So you can try installing these outside of conda, in a separate Python virtual environment. This is provided as an option while installing Model Optimizer: install_prerequisites > install_prerequisites*.sh followed by venv.
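A minimal sketch of that setup, assuming the default 2019.1.144 install path (adjust paths and the environment name to your machine):

```shell
# Separate, non-conda virtualenv with a Model-Optimizer-supported
# TensorFlow. Versions are examples; TF 1.12 needs Python <= 3.6.
python3 -m venv "$HOME/mo_venv"
. "$HOME/mo_venv/bin/activate"
pip install "tensorflow==1.12.0" || echo "TF 1.12 requires Python <= 3.6"

# Then run the prerequisites script shipped with Model Optimizer,
# passing 'venv' as suggested above:
MO=/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer
cd "$MO/install_prerequisites" && ./install_prerequisites_tf.sh venv
```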


Hi Davide,

I also have a problem converting my customized SSD model to IR.

Have you found a solution to this issue?

