Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

OpenVINO: Faster R-CNN conversion errors

Mardoc__Emile
Beginner
806 Views

Hi everyone,

I am just a beginner with Movidius and need some help with model conversion.

Indeed, I am an intern at a French company and have two models (SSD and Faster R-CNN) which work well on Ubuntu. I can detect objects with my own picture dataset, videos and a camera. I would now like to use them on a Raspberry Pi with a Movidius stick (NCS1 or NCS2).

I tried to convert my models with NCSDK v1 and NCSDK v2, but it didn't work, so I tried with OpenVINO.

 

My two questions are:

1) Can you please help me solve the errors below?

2) What is the easiest way to use the mo_tf.py outputs to test inference, and then to detect objects in real time with a camera?


By using the following command line, I get three new files (frozen_inference_graph.bin, frozen_inference_graph.mapping, frozen_inference_graph.xml):

sudo ./mo_tf.py --input_model my_ssd_path/frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config my_ssd_path/pipeline.config --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json --output="detection_boxes,detection_scores,num_detections" --output_dir ./svg_ssd --reverse_input_channels
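As a quick sanity check on the three generated files (the .xml holds the network topology, the .bin the weights, and the .mapping the original-to-IR name mapping), the .xml can be inspected with nothing but the Python standard library. The tiny IR string below is a hand-written stand-in, not output from a real conversion, so the exact attributes will differ from a real frozen_inference_graph.xml:

```python
# Sketch: list layer names from an OpenVINO IR .xml using only the stdlib.
# SAMPLE_IR is a minimal hand-written stand-in for a real IR file.
import xml.etree.ElementTree as ET

SAMPLE_IR = """<net name="frozen_inference_graph" version="6">
  <layers>
    <layer id="0" name="image_tensor" type="Input"/>
    <layer id="1" name="detection_boxes" type="DetectionOutput"/>
  </layers>
</net>"""

def list_layers(xml_text):
    """Return (name, type) pairs for every <layer> element in an IR document."""
    root = ET.fromstring(xml_text)
    return [(l.get("name"), l.get("type")) for l in root.iter("layer")]

layers = list_layers(SAMPLE_IR)
print(layers)
```

Running this against the real .xml (read from disk instead of the string) is an easy way to confirm which output layers actually made it into the IR.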


However, I am unable to get the same files for my Faster R-CNN model using the following line:

sudo ./mo_tf.py --input_model my_faster_path/frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config my_faster_path/pipeline.config  --output="Softmax,SecondStagePostprocessor/Softmax" --output_dir ./svg_faster --tensorflow_use_custom_operations_config ./extensions/front/tf/faster_rcnn_support_api_v1.7.json

I tried with all of the faster rcnn json files, and I always get the output below:

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     my_faster_path/frozen_inference_graph.pb
    - Path for generated IR:     /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/./svg_faster
    - IR output name:     frozen_inference_graph
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Softmax,SecondStagePostprocessor/Softmax
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     my_faster_path/pipeline.config
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/./extensions/front/tf/faster_rcnn_support_api_v1.7.json
Model Optimizer version:     2019.2.0-436-gf5827d4
~/.local/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
~/.local/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
~/.local/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
~/.local/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
~/.local/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
~/.local/lib/python3.5/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPIProposalReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>)": 0
[ ERROR ]  Traceback (most recent call last):
  File "/opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 273, in apply_replacements
    for_graph_and_each_sub_graph_recursively(graph, replacer.find_and_replace_pattern)
  File "/opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/mo/middle/pattern_match.py", line 58, in for_graph_and_each_sub_graph_recursively
    func(graph)
  File "/opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/mo/front/tf/replacement.py", line 95, in find_and_replace_pattern
    self.replace_sub_graph(graph, match)
  File "/opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/mo/front/common/replacement.py", line 140, in replace_sub_graph
    new_sub_graph = self.generate_sub_graph(graph, match)  # pylint: disable=assignment-from-no-return
  File "/opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/extensions/front/tf/ObjectDetectionAPI.py", line 525, in generate_sub_graph
    current_node = skip_nodes_by_condition(match.single_input_node(0)[0].in_node(0),
  File "/opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/mo/front/subgraph_matcher.py", line 115, in single_input_node
    input_nodes = self.input_nodes(port)
  File "/opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/mo/front/subgraph_matcher.py", line 105, in input_nodes
    return self._input_nodes_map[port]
KeyError: 0

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/mo/main.py", line 302, in main
    return driver(argv)
  File "/opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/mo/main.py", line 251, in driver
    is_binary=not argv.input_model_is_text)
  File "/opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 133, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.FRONT_REPLACER)
  File "/opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 299, in apply_replacements
    )) from err
Exception: Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>)": 0

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------


Moreover, I tried without a json file. In this case, I get a shape error:

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     my_faster_path/frozen_inference_graph.pb
    - Path for generated IR:     /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/./svg_faster
    - IR output name:     frozen_inference_graph
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Softmax,SecondStagePostprocessor/Softmax
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     my_faster_path/pipeline.config
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     None
Model Optimizer version:     2019.2.0-436-gf5827d4
  (same numpy FutureWarnings as in the first log)
[ ERROR ]  Shape [-1 -1 -1  3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "image_tensor".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "image_tensor".
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40.
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Parameter.__init__.<locals>.<lambda> at 0x7f2aa5487d08>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "image_tensor" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.


Finally, by adding --input_shape [1,600,600,3], I get:

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     my_faster_path/frozen_inference_graph.pb
    - Path for generated IR:     /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/./svg_faster
    - IR output name:     frozen_inference_graph
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Softmax,SecondStagePostprocessor/Softmax
    - Input shapes:     [1,600,600,3]
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     my_faster_path/pipeline.config
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     None
Model Optimizer version:     2019.2.0-436-gf5827d4
  (same numpy FutureWarnings as in the first log)
[ ERROR ]  Shape is not defined for output 0 of "BatchMultiClassNonMaxSuppression/map/while/Slice_1".
[ ERROR ]  Cannot infer shapes or values for node "BatchMultiClassNonMaxSuppression/map/while/Slice_1".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "BatchMultiClassNonMaxSuppression/map/while/Slice_1".
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40.
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Slice.infer at 0x7fe8b0f11510>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "BatchMultiClassNonMaxSuppression/map/while/Slice_1" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
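For context on that --input_shape: TensorFlow Object Detection API models take NHWC input ([1,600,600,3] here), while OpenVINO IRs generally expect NCHW blobs at inference time, so camera frames have to be transposed before being fed to the network. A minimal stdlib-only sketch of that layout change, with plain nested lists standing in for a real image array:

```python
# Sketch: convert an H x W x C frame (nested lists standing in for a NumPy
# array) into the 1 x C x H x W layout that OpenVINO IRs typically expect.
def hwc_to_nchw(frame):
    """frame: H x W x C nested lists -> 1 x C x H x W nested lists."""
    h, w, c = len(frame), len(frame[0]), len(frame[0][0])
    return [[[[frame[y][x][ch] for x in range(w)] for y in range(h)]
             for ch in range(c)]]

# A tiny 2x2 "image" with 3 channels per pixel:
frame = [[[1, 2, 3], [4, 5, 6]],
         [[7, 8, 9], [10, 11, 12]]]
blob = hwc_to_nchw(frame)
print(len(blob), len(blob[0]), len(blob[0][0]), len(blob[0][0][0]))  # 1 3 2 2
```

In a real pipeline the same step is a one-liner on a NumPy array (`frame.transpose(2, 0, 1)[None]`); the list version above is just to make the index shuffle explicit.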


Please help me to:

1) Solve my error with Faster R-CNN.

2) Figure out what to do with my three output files. I would like an easy way to run inference with them on my own labelled pictures, then with a camera. For instance, on Ubuntu I use TensorFlow and a script which only needs the model and the pictures (or video or camera) as input, and which outputs detection values in a terminal and pictures with bounding boxes in a dedicated window.
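If I understand the OpenVINO object-detection demos correctly, they ultimately just post-process a DetectionOutput blob of shape [1,1,N,7], where each detection is [image_id, class_id, confidence, x_min, y_min, x_max, y_max] with box coordinates normalized to [0, 1]. A minimal sketch of that step, with made-up detection values:

```python
# Sketch: post-process a DetectionOutput-style result, assuming each row is
# [image_id, class_id, confidence, x_min, y_min, x_max, y_max] with the box
# coordinates normalized to [0, 1].
def filter_detections(detections, width, height, threshold=0.5):
    """Keep detections above `threshold` and scale their boxes to pixels."""
    boxes = []
    for image_id, class_id, conf, x0, y0, x1, y1 in detections:
        if conf < threshold:
            continue
        boxes.append((int(class_id), conf,
                      int(x0 * width), int(y0 * height),
                      int(x1 * width), int(y1 * height)))
    return boxes

# Two fake detections on a 640x480 frame; only the first passes the threshold.
raw = [[0, 1, 0.92, 0.10, 0.20, 0.50, 0.80],
       [0, 2, 0.30, 0.00, 0.00, 0.10, 0.10]]
print(filter_detections(raw, 640, 480))
```

The tuples this returns are exactly what a drawing loop (e.g. `cv2.rectangle` on the camera frame) needs.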

Thank you very much!
(If you need more information, it would probably be easier to continue by private message.)

0 Kudos
3 Replies
JesusE_Intel
Moderator
806 Views

Hi Emile,

Could you try to convert your model with the command from the OpenVINO Documentation regarding Faster R-CNN conversion?

./mo.py --input_model=<path_to_frozen.pb> --output=detection_boxes,detection_scores,num_detections --tensorflow_use_custom_operations_config extensions/front/tf/faster_rcnn_support_api_v1.xx.json

Note: The API json file needs to match the version of the TensorFlow Object Detection API used when training your model.

You can use the .xml and .bin files to run inference with one of our sample demos. Take a look at the object_detection_demo_faster_rcnn and object_detection_demo_ssd_async demos on the Open Model Zoo GitHub.

Regards,

Jesus

0 Kudos
Mardoc__Emile
Beginner
806 Views

Hi Jesus,

Thank you very much for your quick answer, I am now able to get the output files!

 

The problem was that I used "--output='Softmax,SecondStagePostprocessor/Softmax'" instead of "--output=detection_boxes,detection_scores,num_detections".

My final command is:

sudo ./mo_tf.py --input_model <my_faster_path>/frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config <my_faster_path>/pipeline.config  --output=detection_boxes,detection_scores,num_detections --output_dir <my_output_path> --tensorflow_use_custom_operations_config extensions/front/tf/faster_rcnn_support.json

 

I will now try to use these new files to detect objects by following the Open Model Zoo GitHub.

 

Best regards,

Emile

0 Kudos
JesusE_Intel
Moderator
806 Views

Hi Emile,

Thank you for confirming, I'm glad you were able to convert your model!

Regards,

Jesus

0 Kudos
Reply