Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Cannot optimize SSD-MobileNet-v2

Cariaggi__Francesco

Hello everyone,

I'm trying to convert the SSD-MobileNet-v2-COCO model (available here: https://software.intel.com/en-us/articles/model-downloader-essentials) so that I can run it on my Neural Compute Stick 2. I ran the following command:

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model /opt/intel/openvino/deployment_tools/tools/model_downloader/object_detection/common/ssd_mobilenet_v2_coco/tf/ssd_mobilenet_v2_coco.frozen.pb -o ClassificationModelsForNCS/SSD-MobileNet-v2-COCO --data_type FP16

But the program terminates with the following message: Illegal instruction (core dumped)

I honestly don't understand why, because I downloaded the model using the model downloader that comes with OpenVINO, by running:

python3 downloader.py --name ssd_mobilenet_v2_coco
Shubha_R_Intel
Employee

Dear Cariaggi, Francesco,

Are you using OpenVINO 2019 R1 (the latest and greatest version)? If not, please download it and try it.

Thanks,

Shubha

Cariaggi__Francesco

Yeah, I'm sorry, I forgot to mention that I was indeed using OpenVINO 2019 R1. Any idea what could be causing my problem?

I'm posting the full output of the first command in my previous post:

Model Optimizer arguments:
Common parameters:
       - Path to the Input Model:      /opt/intel/openvino/deployment_tools/tools/model_downloader/object_detection/common/ssd_mobilenet_v2_coco/tf/ssd_mobilenet_v2_coco.frozen.pb
       - Path for generated IR:        /home/udoo/ClassificationModelsForNCS/SSD-MobileNet-v2-COCO
       - IR output name:       ssd_mobilenet_v2_coco.frozen
       - Log level:    ERROR
       - Batch:        Not specified, inherited from the model
       - Input layers:         Not specified, inherited from the model
       - Output layers:        Not specified, inherited from the model
       - Input shapes:         Not specified, inherited from the model
       - Mean values:  Not specified
       - Scale values:         Not specified
       - Scale factor:         Not specified
       - Precision of IR:      FP16
       - Enable fusing:        True
       - Enable grouped convolutions fusing:   True
       - Move mean values to preprocess section:       False
       - Reverse input channels:       False
TensorFlow specific parameters:
       - Input model in text protobuf format:  False
       - Path to model dump for TensorBoard:   None
       - List of shared libraries with TensorFlow custom layers implementation:        None
       - Update the configuration file with input/output node names:   None
       - Use configuration file used to generate the model with Object Detection API:  None
       - Operations to offload:        None
       - Patterns to offload:  None
       - Use the config file:  None
Model Optimizer version:        2019.1.0-341-gc9b66a2
Illegal instruction (core dumped)
Shubha_R_Intel
Employee

Dear Cariaggi, Francesco,

Indeed, that is very strange. Can you leave off --data_type FP16? Does mo_tf.py still core dump?

Thanks for upgrading to 2019 R1.

Please report back here and I promise to check in.

Shubha
Shubha_R_Intel
Employee

Dear Cariaggi, Francesco, 

A command similar to the one below works perfectly (on Windows rather than Linux, of course). Your issue has to do with the Python interpreter in your environment; it's not related to OpenVINO. Remember, Python is itself a "C" program, so if a core dump happens when you use Python, something is wrong with your version of Python 3. All of the Model Optimizer code consists of pure Python scripts, too.

>python mo_tf.py  --input_model "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.frozen.pb" --tensorflow_use_custom_operations_config  "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json" --tensorflow_object_detection_api_pipeline_config  "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.config" --data_type FP16 --log_level DEBUG

Hope it helps,

Thanks,

Shubha

Cariaggi__Francesco

Thanks for your support Shubha,

Unfortunately, leaving off "--data_type FP16" does not solve the problem. 

I've also tried running the command you posted (adapting it to Linux, of course), though I didn't pass the option "--tensorflow_object_detection_api_pipeline_config" because the file "ssd_mobilenet_v2_coco.config" was not included in the .tar archive containing the SSD-MobileNet-v2 model. I wasn't able to find it anywhere inside the OpenVINO installation directory, either.
Running the command, however, resulted in the same error: Illegal instruction (core dumped).

The weird thing is that optimizing a Caffe model (using mo_caffe.py) works smoothly, and so does running an optimized Caffe model on the NCS 2. Looks like there's a problem with TensorFlow models.

As for the Python interpreter, I'm using Python 3.5.2.

-------
EDIT:
-------
I discovered that starting python3 and trying to issue "import tensorflow"  results in the exact same error: Illegal instruction (core dumped).
I'm pretty confident that there's a problem with the version of TensorFlow that is installed on my machine (namely version 1.9.0). Just to be thorough, I've tried running: 

sudo /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf.sh

but that didn't help. Should I downgrade/upgrade TensorFlow to a specific version?

-------
EDIT 2:
-------
I managed to solve the problem by downgrading TensorFlow to version 1.5.0. According to several other forums, the pre-compiled binaries of TensorFlow > 1.5.0 are built with AVX instructions, which my CPU (an Intel Atom E3950) does not support.
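As a sanity check on Linux, you can confirm whether the CPU advertises AVX at all (a minimal check, assuming /proc/cpuinfo is available):

# List any AVX-family flags the CPU reports; empty output means no AVX,
# which is why prebuilt TensorFlow > 1.5.0 wheels die with "Illegal instruction".
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u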
I wonder if this was a good idea, though, because now I get a different error when trying to optimize the model SSD-MobileNet-v2. Here's the error log:

[ ERROR ]  Shape [-1 -1 -1  3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "image_tensor".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "image_tensor".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x7fb065825c80>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "image_tensor" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
Shubha_R_Intel
Employee

Dear Cariaggi, Francesco,

Glad you solved your issue. Yes, I knew it was not an OpenVINO problem when you said the mo tools gave you a core dump. I'm surprised that the scripts under deployment_tools/model_optimizer/install_prerequisites (which you ran) didn't install a newer TF version, because those prerequisite scripts should ensure that you have the correct version of TensorFlow. TF 1.5.0 is quite ancient.

As for your new problem, ERROR: Shape [-1 -1 -1  3]: what that is telling you is that your batch size, height and width cannot be -1 (the 3 for the number of channels is fine). You need to pass a valid set of four positive integers to mo.py via --input_shape; Model Optimizer does not accept negative values for batch, height, width or channel number. As you can see, I did not run into this error with my command above. Perhaps you forgot to add --tensorflow_use_custom_operations_config and --tensorflow_object_detection_api_pipeline_config to your command?
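For example, a sketch (the 300x300 size here is purely illustrative, not necessarily your model's real input resolution):

# Batch, height, width and channels must all be positive integers.
python3 mo_tf.py --input_model ssd_mobilenet_v2_coco.frozen.pb --input_shape "[1,300,300,3]" --data_type FP16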

Thanks,

Shubha

Cariaggi__Francesco

I've tried passing "--input_shape [1,640,640,3]" since the input shape for the model is 640x640, though I'm not sure about the batch size. In any case, I get the following output, which is honestly overwhelming:

[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/BatchMultiClassNonMaxSuppression/ones".
[ ERROR ]  NodeDef mentions attr 'index_type' not in Op<name=Fill; signature=dims:int32, value:T -> output:T; attr=T:type>; NodeDef: Postprocessor/BatchMultiClassNonMaxSuppression/ones = Fill[T=DT_INT32, index_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_Postprocessor/BatchMultiClassNonMaxSuppression/ones/packed_port_0_ie_placeholder_0_1, _arg_Postprocessor/BatchMultiClassNonMaxSuppression/ones/Const_port_0_ie_placeholder_0_0). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
        [[Node: Postprocessor/BatchMultiClassNonMaxSuppression/ones = Fill[T=DT_INT32, index_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_Postprocessor/BatchMultiClassNonMaxSuppression/ones/packed_port_0_ie_placeholder_0_1, _arg_Postprocessor/BatchMultiClassNonMaxSuppression/ones/Const_port_0_ie_placeholder_0_0)]]

Caused by op 'Postprocessor/BatchMultiClassNonMaxSuppression/ones', defined at:
  File "/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py", line 31, in <module>
    sys.exit(main(get_tf_cli_parser(), 'tf'))
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/main.py", line 312, in main
    return driver(argv)
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
    is_binary=not argv.input_model_is_text)
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 128, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.MIDDLE_REPLACER)
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 167, in apply_replacements
    replacer.find_and_replace_pattern(graph)
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/extensions/middle/PartialInfer.py", line 31, in find_and_replace_pattern
    partial_infer(graph)
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 130, in partial_infer
    node.infer(node)
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/front/tf/partial_infer/tf.py", line 60, in tf_native_tf_node_infer
    tf_subgraph_infer(tmp_node)
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/front/tf/partial_infer/tf.py", line 135, in tf_subgraph_infer
    all_constants, output_tensors = get_subgraph_output_tensors(node)
  File "/opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo/front/tf/partial_infer/tf.py", line 115, in get_subgraph_output_tensors
    tf.import_graph_def(graph_def, name='')
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 316, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 554, in import_graph_def
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3160, in create_op
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1625, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): NodeDef mentions attr 'index_type' not in Op<name=Fill; signature=dims:int32, value:T -> output:T; attr=T:type>; NodeDef: Postprocessor/BatchMultiClassNonMaxSuppression/ones = Fill[T=DT_INT32, index_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_Postprocessor/BatchMultiClassNonMaxSuppression/ones/packed_port_0_ie_placeholder_0_1, _arg_Postprocessor/BatchMultiClassNonMaxSuppression/ones/Const_port_0_ie_placeholder_0_0). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
        [[Node: Postprocessor/BatchMultiClassNonMaxSuppression/ones = Fill[T=DT_INT32, index_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_Postprocessor/BatchMultiClassNonMaxSuppression/ones/packed_port_0_ie_placeholder_0_1, _arg_Postprocessor/BatchMultiClassNonMaxSuppression/ones/Const_port_0_ie_placeholder_0_0)]]

[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f0e176c8620>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "Postprocessor/BatchMultiClassNonMaxSuppression/ones" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

I don't know what to do at this point; I guess I'll have to give up unless you have further suggestions.

Shubha_R_Intel
Employee

Dear Cariaggi, Francesco,

I totally understand how Model Optimizer errors can be overwhelming! But I gave you a perfectly working mo_tf.py command above. I will give it to you here again (you don't need to pass in --input_shape):

python mo_tf.py  --input_model "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.frozen.pb" --tensorflow_use_custom_operations_config  "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json" --tensorflow_object_detection_api_pipeline_config  "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.config" --data_type FP16 --log_level DEBUG
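Adapted to a default Linux install, the same invocation would look roughly like this (a sketch: the .json path matches the stock 2019 R1 layout, while the .config path is a placeholder for wherever you saved the pipeline config):

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model /opt/intel/openvino/deployment_tools/tools/model_downloader/object_detection/common/ssd_mobilenet_v2_coco/tf/ssd_mobilenet_v2_coco.frozen.pb --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /path/to/ssd_mobilenet_v2_coco.config --data_type FP16 --log_level DEBUG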

Please study the following document.

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html

Thanks,

Shubha

Cariaggi__Francesco

Thank you Shubha, the link you provided was extremely helpful. I've tried your command and, surprisingly, it finally worked! Before that, however, I had to install TensorFlow 1.12.0 by compiling it from source, as there was no other way (the official pre-compiled binaries of TensorFlow > 1.5.0 don't run on my old CPU). I had to do that because, as stated here, the models I downloaded were frozen with TensorFlow 1.12.0 and are not guaranteed to work with a different version.
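For anyone in the same situation, the build followed the standard TensorFlow source-build procedure, roughly (a sketch; the exact Bazel version required by TF 1.12 applies):

# Building from source compiles for the host CPU, so the resulting wheel
# contains no AVX instructions on an Atom E3950.
git clone -b v1.12.0 https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure    # accept the defaults for a CPU-only build
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip3 install /tmp/tensorflow_pkg/tensorflow-1.12.0-*.whl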

Despite all of this, I'm still left with a concern. The ".json" and ".config" files I passed to "mo_tf.py" were NOT located in the same place: the ".json" file was inside the OpenVINO 2019 R1 installation directory, while the ".config" file I had to download from the model zoo. That's pretty weird, isn't it? Considering that "mo_tf.py" won't work if either file is missing, why put them in two separate locations?

Finally, if I ever need to optimize a model I trained by myself using TensorFlow, how am I supposed to generate those ".json" and ".config" files? 

Shubha_R_Intel
Employee

Dear Cariaggi, Francesco:

I'm glad that I've been able to help you and that you've been successful!

Despite all of this, I'm still left with a concern. The ".json" and ".config" files I passed to "mo_tf.py" were NOT located in the same place: the ".json" file was inside the OpenVINO 2019 R1 installation directory, while the ".config" file I had to download from the model zoo. That's pretty weird, isn't it? Considering that "mo_tf.py" won't work if either file is missing, why put them in two separate locations?

Your concern is valid, but you have to understand: the *.json belongs to OpenVINO, while the *.config is part of the TensorFlow Object Detection models. OpenVINO provides these various *.json files only to get Model Optimizer to work seamlessly with the various models (mostly TensorFlow) out in the wild. In other words, the *.config files do not belong to OpenVINO at all.

As for this question:

Finally, if I ever need to optimize a model I trained by myself using TensorFlow, how am I supposed to generate those ".json" and ".config" files? 

As long as you don't modify the structure of the TensorFlow model, the pre-packaged *.json files should work fine. But yes, if you somehow modify those models, the *.json may not work. We'd have to investigate those exceptions on a case-by-case basis, but the short answer is: debug the mo code (it's open source, so why not?) and understand how the *.json works, then add/subtract as needed. Training doesn't change the structure of the model; it only updates weights and biases. So if you don't add or remove layers or ops, you should be OK with the pre-packaged *.json files that come with OpenVINO.
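If you want to sanity-check a retrained graph, one simple approach (a sketch; the filename is illustrative) is to dump the node names and verify that the scopes the *.json references are still present:

# Print every node name in the frozen graph, then filter for the
# "Postprocessor" scope that ssd_v2_support.json matches on.
python3 -c "import tensorflow as tf; gd = tf.GraphDef(); gd.ParseFromString(open('frozen_inference_graph.pb', 'rb').read()); print('\n'.join(n.name for n in gd.node))" | grep Postprocessor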

As for the *.config files, hopefully the following Stack Overflow discussion will help you (as you can see, the person asking the question does not use OpenVINO, since those *.config files have nothing to do with OpenVINO!)

https://stackoverflow.com/questions/49148962/tensorflow-object-detection-config-files-documentation

Thanks,

Shubha
Jinyang_H_Intel
Employee

Dear Shubha:

My situation is similar to Cariaggi, Francesco's.

When I use mo_tf.py to convert a .pb file trained with the TensorFlow Object Detection API (TensorFlow==1.5.0, Python 2.7), everything is OK. The OpenVINO on my computer is the latest version.

But if I convert a .pb file trained with the TensorFlow Object Detection API (TensorFlow==1.12.0, Python 2.7), I get an error like this:

[ 2019-06-11 22:41:03,589 ] [ DEBUG ] [ infer:127 ]  --------------------
[ 2019-06-11 22:41:03,589 ] [ DEBUG ] [ infer:128 ]  Partial infer for Postprocessor/Cast
[ 2019-06-11 22:41:03,589 ] [ DEBUG ] [ infer:129 ]  Op: Cast
[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/Cast".
[ ERROR ]  0
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Cast.infer at 0x7feccb18f730>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2019-06-11 22:41:03,591 ] [ DEBUG ] [ infer:194 ]  Node "Postprocessor/Cast" attributes: {'pb': name: "Postprocessor/Cast"
op: "Cast"
input: "Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3"
attr {
  key: "DstT"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "SrcT"
  value {
    type: DT_INT32
  }
}
attr {
  key: "Truncate"
  value {
    b: false
  }
}
, '_in_ports': {0}, 'is_const_producer': False, 'in_ports_count': 1, 'kind': 'op', 'out_ports_count': 1, 'shape_attrs': ['pad', 'output_shape', 'window', 'stride', 'shape'], 'infer': <function Cast.infer at 0x7feccb18f730>, 'precision': 'FP32', 'op': 'Cast', '_out_ports': {0}, 'is_partial_inferred': False, 'is_output_reachable': True, 'name': 'Postprocessor/Cast', 'dst_type': <class 'numpy.float32'>, 'is_undead': False, 'dim_attrs': ['axis', 'batch_dims', 'spatial_dims', 'channel_dims'], 'IE': [('layer', [('id', <function Op.substitute_ie_attrs.<locals>.<lambda> at 0x7fecca2fd598>), 'name', 'precision', 'type'], [('data', [], []), '@ports', '@consts'])]}
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "Postprocessor/Cast" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
[ 2019-06-11 22:41:03,592 ] [ DEBUG ] [ main:318 ]  Traceback (most recent call last):
  File "/home/yang/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 130, in partial_infer
    node.infer(node)
  File "/home/yang/intel/openvino_2019.1.144/deployment_tools/model_optimizer/extensions/ops/Cast.py", line 40, in infer
    copy_shape_infer(node, lambda n: n.in_node().value.astype(n.dst_type))
  File "/home/yang/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/front/common/partial_infer/elemental.py", line 33, in copy_shape_infer
    single_output_infer(node, lambda n: n.in_node().shape, value_infer)
  File "/home/yang/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/front/common/partial_infer/elemental.py", line 19, in single_output_infer
    node.out_node(0).shape = shape_infer(node)
  File "/home/yang/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/front/common/partial_infer/elemental.py", line 33, in <lambda>
    single_output_infer(node, lambda n: n.in_node().shape, value_infer)
  File "/home/yang/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/graph/graph.py", line 139, in in_node
    return self.in_nodes(control_flow=control_flow)[key]
KeyError: 0

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/yang/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 167, in apply_replacements
    replacer.find_and_replace_pattern(graph)
  File "/home/yang/intel/openvino_2019.1.144/deployment_tools/model_optimizer/extensions/middle/PartialInfer.py", line 31, in find_and_replace_pattern
    partial_infer(graph)
  File "/home/yang/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 196, in partial_infer
    refer_to_faq_msg(38)) from err
mo.utils.error.Error: Stopped shape/value propagation at "Postprocessor/Cast" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/yang/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/main.py", line 312, in main
    return driver(argv)
  File "/home/yang/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
    is_binary=not argv.input_model_is_text)
  File "/home/yang/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 128, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.MIDDLE_REPLACER)
  File "/home/yang/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 184, in apply_replacements
    )) from err
mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "Postprocessor/Cast" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

Why? It is really confusing.

Thanks,

Jinyang

Shubha_R_Intel
Employee

Dear He, Jinyang ,

First, please give me your exact mo_tf.py command. Also, why not pick a supported and tested model from the Supported List? Use the TensorFlow Object Detection API Model Optimizer conversion steps to convert SSD, since SSD is in the TF Object Detection family. I have done this many times; it should work for you.

Please report your status on this forum.

Thanks!

Shubha

He__Jinyang1
Beginner

Dear Shubha,

My command is like this:

python3 /home/yang/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \ --input_model \ /home/share/models/research/object_detection/train/export_dsm_focal/frozen_inference_graph.pb \ --tensorflow_use_custom_operations_config /home/yang/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \ --tensorflow_object_detection_api_pipeline_config /home/share/models/research/object_detection/train/export_dsm_focal/pipeline.config

Everything is OK when I use the .pb file obtained with the TF Object Detection API (TF==1.5.0, py2.7).

When I try to convert the .pb file obtained with the TF Object Detection API (TF==1.12.0, py2.7), the above error occurred.

Does Model Optimizer have a limit on the version of the TF Object Detection API?

Thanks,

Jinyang

Jinyang_H_Intel
Employee
Dear Shubha, sorry, my command is actually like this:

python3 /home/yang/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model /home/share/models/research/object_detection/train/export_dsm_focal/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /home/yang/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /home/share/models/research/object_detection/train/export_dsm_focal/pipeline.config
Shubha_R_Intel
Employee

Dear He, Jinyang,

Lately the TensorFlow Object Detection API has changed considerably, and yes, certain versions are known not to work with the *.json files under deployment_tools/model_optimizer/extensions/front/tf/.

For TF 1.12, please do the following: change "Postprocessor/ToFloat" to "Postprocessor/Cast" on line 57 of your ssd_v2_support.json and it should work.
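On a stock Linux install that is a one-line edit, for example (a sketch; the path assumes the default 2019 R1 layout):

# Rename the start point that changed between TF versions
# (Postprocessor/ToFloat became Postprocessor/Cast in newer exports).
sed -i 's|"Postprocessor/ToFloat"|"Postprocessor/Cast"|' /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json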

Sorry about the trouble. Let me know if this works for you!

Thanks

Shubha

Jinyang_H_Intel
Employee

Dear Shubha,

Thank you for your help and patience, it works successfully!

Shubha_R_Intel
Employee

Dear He, Jinyang ,

I'm so happy to hear it, and thanks for sharing your success back with the OpenVINO community!

Shubha
