Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer-vision-related on Intel® platforms.

Failed to convert TensorFlow SSD MobileNet V1 COCO model with OpenVINO

abhi1
Beginner
3,671 Views
Error:
[ ERROR ] Shape [-1 -1 -1 3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.
[ ERROR ] Cannot infer shapes or values for node "image_tensor".
[ ERROR ] Not all output shapes were inferred or fully defined for node "image_tensor". For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x000001F217D4B1E0>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "image_tensor" node. For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

I have trained the model on my own labelled dataset, so it's a modified MobileNet.
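For context, the error's own suggestion amounts to overriding the placeholder shape, roughly like the sketch below ([1,300,300,3] is only an assumption based on the standard 300x300 SSD MobileNet V1 input; the replies further down show that passing the Object Detection API pipeline config and support .json is the proper route):

python3 <INSTALL_DIR>/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_inference_graph.pb --input_shape [1,300,300,3]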

 

0 Kudos
24 Replies
JesusE_Intel
Moderator
2,804 Views

Hi abhi,

 

Could you please share additional information?

  • Please provide the model optimizer command that you used.
  • What version of the OpenVINO toolkit are you using?
  • Could you provide your model so I can replicate the issue? You can share it with me via private message (click my username, then on my profile click Send Message and attach the model).
  • What version of Tensorflow did you use to train the model?

 

Regards,

Jesus

0 Kudos
abhi1
Beginner
2,804 Views

@Intel_Jesus I have sent you a private message with all the details.

0 Kudos
abhi1
Beginner
2,804 Views

 

@Intel_Jesus​ 

0 Kudos
JesusE_Intel
Moderator
2,804 Views

Hi abhi,

 

Thank you for sending your model over. I was not able to convert it using the Model Optimizer. I am not familiar with the tutorial and have not gone through it myself. However, the tutorial mentions using faster_rcnn_resnet101_coco as the base model prior to training. Which model did you use?

 

Please update your OpenVINO toolkit to the latest release (2019 R2) that was released today. Using the latest release, I converted the faster_rcnn_resnet101_coco model with the following command:

python3 ~/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config ~/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json --tensorflow_object_detection_api_pipeline_config pipeline.config -b 1 --data_type FP16

Could you try and see if you are able to convert the model? For your custom-trained model, you would need to use the faster_rcnn_support_api_1.1x.json that matches the TensorFlow version you trained with.

--tensorflow_use_custom_operations_config ~/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.x.json

Also, try to freeze your model with the steps mentioned in the following document:

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#freeze-the-tensorflow-model
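For reference, a checkpoint trained with the TensorFlow Object Detection API is usually frozen with the export_inference_graph.py script from the tensorflow/models repository; a minimal sketch, where the training directory, pipeline file and checkpoint number are placeholders for your own setup:

python3 object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path training/pipeline.config \
    --trained_checkpoint_prefix training/model.ckpt-XXXX \
    --output_directory exported_model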

 

If you get any errors, please add --log_level=DEBUG to your mo_tf command and send me the following information:

  • New frozen model with the steps above
  • pipeline.config file
  • command used
  • full log file of the mo_tf command

 

Regards,

Jesus

0 Kudos
abhi1
Beginner
2,804 Views

@Intel_Jesus I used the model I mentioned in the question, this one: http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz

 

I've compiled the model with TensorFlow version 1.13 (just in case). It seems that matching the TF version matters a lot; thanks to @Aroop_at_Intel for that.

 

Also, you used ~/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json.

 

For MobileNet SSD v1 COCO, what should I use? tf/ssd_support.json?

 

I'm taking Model Zoo pre-trained models and retraining them for custom object detection. Can you please take the model I gave you (the .tar.gz) and try that, to see if it works? If that works, a custom-trained one should also work.

 

0 Kudos
JesusE_Intel
Moderator
2,804 Views

Hi abhi,

 

The ssd_support.json is only for frozen SSD topologies from the Model Zoo. Since you trained your own network, you will need to use ssd_support_api_v1.14.json, found in the ~/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf directory.

 

I was able to convert your frozen model with the OpenVINO toolkit 2019 R2 using the following command:

python3 ~/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config ~/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config pipeline.config -b 1 --data_type FP16 --reverse_input_channels
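As a quick sanity check of the generated IR before moving on to inference, you can try loading it with the Inference Engine Python API; a minimal sketch, assuming the 2019-era openvino.inference_engine bindings and that the .xml/.bin pair sits in the current directory:

from openvino.inference_engine import IENetwork, IECore

ie = IECore()
net = IENetwork(model="frozen_inference_graph.xml", weights="frozen_inference_graph.bin")
print("Inputs: ", list(net.inputs.keys()))
print("Outputs:", list(net.outputs.keys()))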

Regards,

Jesus

0 Kudos
abhi1
Beginner
2,804 Views

Will try it and update you if anything comes up.

0 Kudos
abhi1
Beginner
2,804 Views

I get this error, @Intel_Jesus. What pipeline.config are you using? I didn't share mine with you.

The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ] Cannot infer shapes or values for node "BoxPredictor_0/ClassPredictor/BiasAdd/Reshape".
[ ERROR ] Number of elements in input [ 1 19 19 6] and output [1, 23, 91] of reshape node BoxPredictor_0/ClassPredictor/BiasAdd/Reshape mismatch
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function Reshape.infer at 0x7f9117177158>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "BoxPredictor_0/ClassPredictor/BiasAdd/Reshape" node. For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

 

 

 

0 Kudos
JesusE_Intel
Moderator
2,804 Views

I used the pipeline.config file included in the pre-trained model you used and changed the num_classes to match your model (num_classes: 1).

http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz
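For reference, the relevant edit sits near the top of pipeline.config in the extracted archive; a sketch of the changed block for a single-class model (only num_classes differs from the stock COCO file, which ships with 90 classes):

model {
  ssd {
    num_classes: 1   # changed from 90 (the COCO default) to match the single-class model
    # ... rest of the ssd block left as shipped ...
  }
}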

 

Regards,

Jesus

0 Kudos
abhi1
Beginner
2,804 Views

Thanks a lot, @Intel_Jesus, it generated the IR files. Thanks for the continuous help and exceptional support.

0 Kudos
abhi1
Beginner
2,804 Views

2.jpg

@Intel_Jesus The inference is not working. I used the file linked below for inference, with OpenVINO R1 installed on a Raspberry Pi.

 

Can you share something about inference and how I can check that?

 

This is the file I used for inference. The class is person

 

https://drive.google.com/file/d/1A0SKwxoM4mD9zdqvQ5VCcYkE766vBC8y/view?usp=sharing

0 Kudos
abhi1
Beginner
2,804 Views

That's the image I'm using for detection (person class).

0 Kudos
JesusE_Intel
Moderator
3,105 Views

Hi abhi,

 

The issue may be with the code you used for inference. Did you start from scratch or base it on one of our sample apps?

 

I was able to detect the person in your image using one of our sample demos. To build the samples, run the build_samples.sh script in the <openvino>/inference_engine/samples directory.

 

out_0.bmp

I used the following command to run your model and image with the sample app.

Python3 ~/inference_engine_samples_build/intel64/Release/object_detection_sample_ssd -m frozen_inference_graph.xml -i image.jpeg -d MYRIAD  
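If you would rather check the result from Python directly, a minimal detection script along these lines should work; a sketch only, assuming the 2019-era Inference Engine Python API, OpenCV for image loading, and the usual SSD output layout of shape [1, 1, N, 7]:

import cv2
from openvino.inference_engine import IENetwork, IECore

MODEL_XML = "frozen_inference_graph.xml"
MODEL_BIN = "frozen_inference_graph.bin"
IMAGE = "image.jpeg"

ie = IECore()
net = IENetwork(model=MODEL_XML, weights=MODEL_BIN)
input_blob = next(iter(net.inputs))
output_blob = next(iter(net.outputs))
n, c, h, w = net.inputs[input_blob].shape

# Load the image and reshape it to the NCHW layout the IR expects
frame = cv2.imread(IMAGE)
blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))

exec_net = ie.load_network(network=net, device_name="MYRIAD")
result = exec_net.infer(inputs={input_blob: blob})

# SSD output rows: [image_id, label, confidence, xmin, ymin, xmax, ymax] (normalized coords)
for det in result[output_blob][0][0]:
    conf = float(det[2])
    if conf > 0.5:
        xmin, ymin = int(det[3] * frame.shape[1]), int(det[4] * frame.shape[0])
        xmax, ymax = int(det[5] * frame.shape[1]), int(det[6] * frame.shape[0])
        print("label %d  conf %.2f  box (%d, %d, %d, %d)" % (int(det[1]), conf, xmin, ymin, xmax, ymax))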

Regards,

Jesus

 

0 Kudos
abhi1
Beginner
2,804 Views

What's the process for Raspberry Pi + Movidius webcam inferencing?

0 Kudos
abhi1
Beginner
2,804 Views

python3 object_detection_sample_ssd -m frozen_inference_graph.xml -i image.jpeg -d MYRIAD

 File "object_detection_sample_ssd", line 1

SyntaxError: Non-UTF-8 code starting with '\xcf' in file object_detection_sample_ssd on line 1, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details

 

My Python version is 3.7.3. Is that causing the problem? It looks like something has changed in the PEP standards.
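Worth noting as an aside: object_detection_sample_ssd under intel64/Release is one of the compiled C++ samples, so it is normally invoked directly rather than through the Python interpreter (feeding a compiled binary to python3 typically produces exactly this kind of Non-UTF-8 SyntaxError), e.g.:

~/inference_engine_samples_build/intel64/Release/object_detection_sample_ssd -m frozen_inference_graph.xml -i image.jpeg -d MYRIAD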

0 Kudos
abhi1
Beginner
2,804 Views

should I downgrade and rebuild?

0 Kudos
abhi1
Beginner
2,804 Views

Also, @Intel_Jesus, I have just converted a YOLO (Darknet) model -> TensorFlow -> OpenVINO (XML, BIN, etc.).

 

But what do I use for detection? object_detection_sample_ssd is for SSD; what do I use for detection with YOLO? The end goal is deploying it on an NCS2 + Raspberry Pi and detecting with a webcam continuously.

 

So please help me with that as well, e.g., explain the further steps; any link or anything would help.

 

0 Kudos
JesusE_Intel
Moderator
2,804 Views

Hi Abhi,

 

This discussion is getting too long and straying from the original question. Could you please open a new discussion for the YOLO (Darknet) question?

 

The latest release of OpenVINO 2019 R2 for the Raspberry Pi does not include the object detection SSD samples. I was able to run your model on 2019 R2 with my Raspberry Pi using the steps below.

 

  1. Install R2 2019 on the Raspberry Pi following the instructions here: https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_raspbian.html
  2. Download the R1.1 OpenVINO package, extract it, and run the sample with your model using the following commands:
     cd ~/Downloads
     wget https://download.01.org/opencv/2019/openvinotoolkit/R1/l_openvino_toolkit_raspbi_p_2019.1.144.tgz
     tar -xf l_openvino_toolkit_raspbi_p_2019.1.144.tgz
     cd inference_engine_vpu_arm/inference_engine/samples/python_samples/object_detection_demo_ssd_async
     python3 object_detection_demo_ssd_async.py -m frozen_inference_graph.xml -d MYRIAD -i cam

It's possible your previous error was due to a corrupted file and not the Python version; I used Python 3.5.3. To use a webcam with the Raspberry Pi, pass -i cam.

 

Regards,

Jesus

0 Kudos
abhi1
Beginner
2,804 Views

So, a model converted with 2019 R2 can be used with 2019 R1.1.144 for inference?

 

So what I understood is: install 2019 R2 on the Raspberry Pi, but use the R1.1 sample for inference?

0 Kudos
abhi1
Beginner
2,201 Views

I have used build_samples.sh and everything built without any error. What could be corrupted? The converted .xml model or object_detection_sample_ssd?

0 Kudos