Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Model Optimizer error: node has more than one outputs

Qianying_Z_Intel
Employee

Hi,

I'm trying to convert a Mask R-CNN model with the command below:

>> python3 mo_tf.py --input_model ~/frozen_inference_graph.pb --output=detection_boxes,detection_scores,num_detections,Reshape_16 --tensorflow_object_detection_api_pipeline_config ~/pipeline.config

An error is generated as:

ERROR: Node Preprocessor/map/while/ResizeToRange/unstack has more than one outputs. Provide output port explicit
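In case it helps, here is a small TensorFlow 1.x sketch (assuming TF 1.x is installed and the frozen graph is in the current directory; adjust the path as needed) that prints the output tensors of the node named in the error, to see how many output ports it has:

# Sketch: inspect the output ports of the failing node in the frozen graph.
# Assumes TensorFlow 1.x and the same frozen_inference_graph.pb passed to mo_tf.py.
import tensorflow as tf

graph_def = tf.GraphDef()
with open("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

op = graph.get_operation_by_name("Preprocessor/map/while/ResizeToRange/unstack")
for tensor in op.outputs:
    print(tensor.name)  # one line per output port, e.g. .../unstack:0, .../unstack:1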

Detailed info:

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mask_rcnn_resnet101_atrous_coco_2018_01_28/frozen_inference_graph.pb
        - Path for generated IR:        /opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/.
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        detection_boxes,detection_scores,num_detections,Reshape_16
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Offload unsupported operations:       False
        - Path to model dump for TensorBoard:   None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  /opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mask_rcnn_resnet101_atrous_coco_2018_01_28/pipeline.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  None
Model Optimizer version:        1.2.185.5335e231

 

Severine_H_Intel
Employee

Hi Qianying, 

The correct MO command line for Mask R-CNN is this one:

python3 mo_tf.py --input_model ~/frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support.json --tensorflow_object_detection_api_pipeline_config ~/pipeline.config --reverse_input_channels 
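Once the IR is generated, a minimal sketch of reading and loading it with the Inference Engine Python API of this release is below; the file names and the CPU extension path are assumptions based on the default mo_tf.py output and install layout, so adjust them for your setup:

# Minimal sketch: read the generated IR and load it on the CPU plugin.
# File names and the extension library path below are assumptions; adjust for your install.
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork.from_ir(model="frozen_inference_graph.xml",
                        weights="frozen_inference_graph.bin")
plugin = IEPlugin(device="CPU")
# Mask R-CNN uses layers implemented in the CPU extensions library.
plugin.add_cpu_extension("/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/"
                         "inference_engine/lib/ubuntu_16.04/intel64/libcpu_extension_avx2.so")
exec_net = plugin.load(network=net)
print("Inputs:", list(net.inputs.keys()), "Outputs:", list(net.outputs.keys()))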

In our latest release, we changed the way models from the TF Object Detection API are handled. I invite you to read the documentation about it: computer_vision_sdk_2018.3.343/deployment_tools/documentation/docs/TensorFlowObjectDetectionAPIModels.html

Best, 

Severine

Palanivel_G_Intel

Hi Severine,

I used this command: python3 mo_tf.py --input_model ~/frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support.json --tensorflow_object_detection_api_pipeline_config ~/pipeline.config --reverse_input_channels

But during the inference stage, I'm getting this error: "[ ERROR ] Error reading network: Incorrect crop data! Offset(2) + result size of output(100) should be less then input size(7) for axis(3)". The error is the same with both CPU and GPU plugins on Skylake.
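For reference, here is a small sketch (assuming the IR kept the default name frozen_inference_graph.xml; adjust the path if not) to print the Crop layers and their parameters from the generated IR and narrow down which layer the message refers to:

# Sketch: list Crop layers and their attributes from the generated IR.
import xml.etree.ElementTree as ET

root = ET.parse("frozen_inference_graph.xml").getroot()
for layer in root.iter("layer"):
    if layer.get("type") == "Crop":
        data = layer.find("data")
        print(layer.get("name"), dict(data.attrib) if data is not None else {})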

I used the ResNet-101-based Mask R-CNN from the TF model zoo (mask_rcnn_resnet101_atrous_coco in https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md).

My OpenVINO version is 2018.3.338. Please suggest how to resolve this issue.

Thanks,

Palanivel
