Nadh_V__Naveen
Beginner

Model Optimizer Error: Exception occurred during running replacer

I was trying to run mo_tf and got the following error:

 

Common parameters:
    - Path to the Input Model:     /home/naveen/openvino_models/./petfaces/frozen_inference_graph.pb
    - Path for generated IR:     /home/naveen/openvino_models/petfaces_IR_BGR
    - IR output name:     frozen_inference_graph
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP16
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     True
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Offload unsupported operations:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     /home/naveen/openvino_models/./petfaces/pipeline.config
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/ssd_support.json
Model Optimizer version:     1.5.12.49d067a0
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPISSDPostprocessorReplacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPISSDPostprocessorReplacement'>)": 
[ ERROR ]  Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 114, in apply_replacements
    replacer.find_and_replace_pattern(graph)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/tf/replacement.py", line 91, in find_and_replace_pattern
    self.replace_sub_graph(graph, match)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/common/replacement.py", line 115, in replace_sub_graph
    new_sub_graph = self.generate_sub_graph(graph, match)  # pylint: disable=assignment-from-no-return
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/extensions/front/tf/ObjectDetectionAPI.py", line 899, in generate_sub_graph
    _relax_reshape_nodes(graph, pipeline_config)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/extensions/front/tf/ObjectDetectionAPI.py", line 157, in _relax_reshape_nodes
    assert (old_reshape_node.op == 'Reshape')
AssertionError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 325, in main
    return driver(argv)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 267, in driver
    mean_scale_values=mean_scale)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 248, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.FRONT_REPLACER)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 127, in apply_replacements
    )) from err
Exception: Exception occurred during running replacer "ObjectDetectionAPISSDPostprocessorReplacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPISSDPostprocessorReplacement'>)": 

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------
 

 

Model optimizer parameters:

 

python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py --input_model <path to frozen_inference_graph.pb> --tensorflow_use_custom_operations_config <path to ssd_support.json> --data_type=FP16 --output_dir <path to output dir> --tensorflow_object_detection_api_pipeline_config <path to pipeline.config> --reverse_input_channels

 

 

model: ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03 

6 Replies
kao__mars
Beginner

Hi,

I got the same error converting a model with the network "mobilenet_ssd_v2".

....

[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPISSDPostprocessorReplacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPISSDPostprocessorReplacement'>)":
[ ERROR ]  Traceback (most recent call last):
  File "D:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 114, in apply_replacements
    replacer.find_and_replace_pattern(graph)
  File "D:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\mo\front\tf\replacement.py", line 91, in find_and_replace_pattern
    self.replace_sub_graph(graph, match)
  File "D:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\mo\front\common\replacement.py", line 115, in replace_sub_graph
    new_sub_graph = self.generate_sub_graph(graph, match)  # pylint: disable=assignment-from-no-return
  File "D:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\extensions\front\tf\ObjectDetectionAPI.py", line 899, in generate_sub_graph
    _relax_reshape_nodes(graph, pipeline_config)
  File "D:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\extensions\front\tf\ObjectDetectionAPI.py", line 157, in _relax_reshape_nodes
    assert (old_reshape_node.op == 'Reshape')
AssertionError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\mo\main.py", line 325, in main
    return driver(argv)
  File "D:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\mo\main.py", line 267, in driver
    mean_scale_values=mean_scale)
  File "D:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\mo\pipeline\tf.py", line 248, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.FRONT_REPLACER)
  File "D:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 127, in apply_replacements
    )) from err
Exception: Exception occurred during running replacer "ObjectDetectionAPISSDPostprocessorReplacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPISSDPostprocessorReplacement'>)":

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------

Do you have any suggestions for this issue?

Thanks.

Shubha_R_Intel
Employee

This command worked perfectly fine for me and successfully produced IR:

python .\mo_tf.py --input_meta_graph C:\Intel\other-models\ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03.tar\ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03\ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03\model.ckpt.meta --log_level DEBUG --tensorflow_use_custom_operations_config C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\extensions\front\tf\ssd_support.json --data_type=FP16 --tensorflow_object_detection_api_pipeline_config C:\Intel\other-models\ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03.tar\ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03\ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03\pipeline.config --reverse_input_channels

The major difference between your command and mine is that I'm using --input_meta_graph while you're using --input_model.

Thanks for using OpenVINO!

Shubha

Olivero__Alberto
Beginner

Dear Shubha,

I adopted your approach on the TF model ssd_mobilenet_v1_coco_2018_01_28.

With the following command it converted to IR with no errors:

sudo ./mo_tf.py --input_meta_graph ~/Scaricati/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt.meta --tensorflow_object_detection_api_pipeline_config ~/Scaricati/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --tensorflow_use_custom_operations_config ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/ssd_support.json --model_name ssd_mobilenet_OLI --output_dir ~/Scaricati/ssd_mobilenet_v1_coco_2018_01_28 --data_type FP16 --input_shape [1,300,300,3] --reverse_input_channels

The output looks promising:

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/alberto/Scaricati/ssd_mobilenet_v1_coco_2018_01_28/ssd_mobilenet_OLI.xml
[ SUCCESS ] BIN file: /home/alberto/Scaricati/ssd_mobilenet_v1_coco_2018_01_28/ssd_mobilenet_OLI.bin
[ SUCCESS ] Total execution time: 18.04 seconds.

 

At this stage I tried to run inference with the following command:

./object_detection_demo_ssd_async -i /dev/video0 -m ~/Scaricati/ssd_mobilenet_v1_coco_2018_01_28/ssd_mobilenet_OLI.xml -d MYRIAD

 

I get the following error:

InferenceEngine:
    API version ............ 1.4
    Build .................. 19154
[ INFO ] Parsing input parameters
[ INFO ] Reading input

(object_detection_demo_ssd_async:18959): GStreamer-CRITICAL **: 00:59:01.431: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed
[ INFO ] Loading plugin

    API version ............ 1.5
    Build .................. 19154
    Description ....... myriadPlugin
[ INFO ] Loading network files
[ ERROR ] Error reading network: input must have dimensions

 

Any idea how to fix this? I also tested using an mp4 video as input, but the error is the same.
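One way to narrow down "Error reading network: input must have dimensions" is to check whether the generated .xml actually declares dimensions on its input layer. The sketch below is a minimal, hypothetical example of the 2018-era IR layout (the embedded XML is made up for illustration, not taken from a real model):

```python
# Sketch: check whether an IR .xml declares input dimensions.
# SAMPLE_IR is a hypothetical, minimal fragment of an IR file;
# in practice, pass the contents of your generated .xml instead.
import xml.etree.ElementTree as ET

SAMPLE_IR = """<net name="ssd_mobilenet_OLI" version="4">
  <layers>
    <layer id="0" name="image_tensor" type="Input">
      <output>
        <port id="0">
          <dim>1</dim><dim>3</dim><dim>300</dim><dim>300</dim>
        </port>
      </output>
    </layer>
  </layers>
</net>"""

def input_dims(ir_xml: str):
    """Return {layer_name: [dims]} for every Input layer in the IR."""
    root = ET.fromstring(ir_xml)
    shapes = {}
    for layer in root.iter("layer"):
        if layer.get("type") == "Input":
            dims = [int(d.text) for d in layer.iter("dim")]
            shapes[layer.get("name")] = dims
    return shapes

print(input_dims(SAMPLE_IR))
```

If this prints an empty dim list (or no Input layer at all) for your .xml, the Model Optimizer step produced an IR without a fixed input shape, which would explain the plugin error.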

zhao__wang
Beginner

Olivero, Alberto wrote:

(quoted post above)

I ran into the same situation as you.

I have no idea what's going on. If you make any progress, please share it with me.

Thanks.

nikos1
Valued Contributor I

> GStreamer-CRITICAL **: 00:59:01.431: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed

This is a video-capture OpenCV/GStreamer issue.

What is the output of v4l2-ctl --list-devices?

Is your OpenCV/OpenVINO environment configured properly? What is your OS?
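If v4l2-ctl is not installed, a quick stand-in is to list the /dev/video* nodes directly. A minimal sketch (the dev_dir parameter is only there to make the helper easy to test; it defaults to /dev):

```python
# Sketch: list V4L2 capture device nodes, a rough stand-in for
# `v4l2-ctl --list-devices` when that tool is unavailable.
import glob
import os

def list_video_devices(dev_dir: str = "/dev") -> list:
    """Return sorted /dev/video* style device paths under dev_dir."""
    return sorted(glob.glob(os.path.join(dev_dir, "video*")))

print(list_video_devices())  # empty list means no camera node is visible
```

If the list is empty, the demo has no camera to open, and the GStreamer assertion is a symptom of that rather than a model problem.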

  

Olivero__Alberto
Beginner

I SOLVED it with 2 changes:

  1. Add the following parameter to mo_tf.py:
        --output="detection_boxes,detection_scores,num_detections"
     With this, inference will work.
  2. When running inference with ./object_detection_demo_ssd_async, use "-i cam" if the input is a camera, and a file path such as "-i ~/Scaricati/videoplayback.mp4" if it is a video.

@nikos I was referring to the camera with the wrong command; just using "-i cam" it works.

I am on Ubuntu 18.04.

many thanks for your question that drove me in the right direction.
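For reference, the full invocation with the extra --output flag can be assembled like this. This is a sketch: the mo_tf.py path and the argument file names are placeholders, so adjust them to your installation before running:

```python
# Sketch: assemble the Model Optimizer command line with the extra
# --output flag described above. MO_TF and all paths passed in are
# hypothetical placeholders.
MO_TF = "/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py"

def build_mo_command(meta_graph, pipeline_config, ssd_support_json, output_dir):
    """Return the mo_tf.py argument list as a Python list of strings."""
    return [
        "python3", MO_TF,
        "--input_meta_graph", meta_graph,
        "--tensorflow_object_detection_api_pipeline_config", pipeline_config,
        "--tensorflow_use_custom_operations_config", ssd_support_json,
        "--output_dir", output_dir,
        "--data_type", "FP16",
        "--input_shape", "[1,300,300,3]",
        "--reverse_input_channels",
        # The fix: name the detection outputs explicitly.
        "--output", "detection_boxes,detection_scores,num_detections",
    ]

print(" ".join(build_mo_command("model.ckpt.meta", "pipeline.config",
                                "ssd_support.json", "ir_out")))
```

The list form can be passed straight to subprocess.run() without shell quoting worries.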

 

 
