Truong__Dien_Hoa
New Contributor II
454 Views

Model Optimizer for tensorflow model - Object detection ssd_mobilenet_v1

Hi,

I have a TensorFlow frozen model (.pb) obtained from this tutorial: github.com/jkjung-avt/hand-detection-tutorial .

The tutorial is based on the original TensorFlow object detection repository, so I don't think there is anything special about the model.

I tried to optimize the model with :

 python3 mo_tf.py --input_model /home/ben/hand-detection-tutorial/model_exported/frozen_inference_graph.pb

But I encountered an error below:

[ ERROR ]  Shape [-1 -1 -1  3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.

So I retried, specifying the input shape:

python3 mo_tf.py --input_model /home/ben/hand-detection-tutorial/model_exported/frozen_inference_graph.pb --input_shape [1,300,300,3]

Still got an error:

[ ERROR ]  Shape is not defined for output 0 of "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice".
[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice".

 

Does anyone have an idea how I can fix this problem? Thank you in advance.

7 Replies
Talbi__Ahmed
Beginner

Hi,

Which model did you train? If it is the SSD, you have to pass the .json file mentioned in the tutorial, located under <INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf.

The tutorial is based on the TensorFlow Object Detection API, so for the SSD you should use ssd_v2_support.json.

As a first step, try converting the frozen pretrained model (a good exercise that helps you understand how to use the mo_tf script). Adapt the following command:

./mo_tf.py --input_model <Path_Model>/frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config <Path to config file>/pipeline.config --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json --output="detection_boxes,detection_scores,num_detections" --output_dir Desktop --reverse_input_channels --log_level DEBUG

After retraining you will most probably hit an error related to the TensorFlow version (OpenVINO uses 1.2, and the Object Detection API has trouble with it). To solve it, create a virtual environment and install the same TensorFlow version you used to export the models.
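The virtual-environment step could look like the sketch below; the environment name and the TensorFlow version number are placeholders, so install whichever version you actually used to export the model:

```shell
# Sketch only: isolate a TensorFlow install matching the version the
# model was exported with (1.12.0 here is a placeholder, not a recommendation).
python3 -m venv tf-export-env
source tf-export-env/bin/activate
pip install tensorflow==1.12.0
```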

cheers.

Jakub
Beginner

instead of:

--input_shape [1,300,300,3]

try using:

-b 1
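Putting this together with the earlier command, a conversion call might look like the sketch below. The model path is taken from the first post; the pipeline.config location and the support-file path are assumptions to adapt for your setup:

```shell
# Sketch only: -b 1 sets just the undefined batch dimension to 1,
# leaving the rest of the model's input shape intact.
# Paths below are assumptions; adjust them for your installation.
python3 mo_tf.py \
  --input_model /home/ben/hand-detection-tutorial/model_exported/frozen_inference_graph.pb \
  --tensorflow_object_detection_api_pipeline_config /home/ben/hand-detection-tutorial/model_exported/pipeline.config \
  --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
  -b 1
```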

Truong__Dien_Hoa
New Contributor II

Thank you so much @Jakub. I understand better how OpenVINO works now.

Actually, I am now stuck on another bug: Model Optimizer cannot parse the config file.

[ ERROR ]  Failed to convert tokens to dictionary: Wrong character "Use" in position 62
[ ERROR ]  Failed to generate dictionary representation of file.

You can find the config file in the attachment.

Sorry to bother you so much; I am not familiar with OpenVINO or TensorFlow. I worked with PyTorch before, and at first I tried to convert the model to an .onnx file to use with OpenVINO, but some functions are not implemented yet, so I switched to TensorFlow.

Jakub
Beginner

Hi @Truong,

Have a look at this thread: https://software.intel.com/en-us/forums/computer-vision/topic/785586 . It also covers the "[ ERROR ]  Failed to convert tokens to dictionary:" error.

I hope it helps,

Jakub

Truong__Dien_Hoa
New Contributor II

Thank you so much, @Jakub. I just asked a question there.

Truong__Dien_Hoa
New Contributor II

Hi @Talbi Ahmed, it has been a long time since my post, but I would like to say thank you. I was not paying attention and thought you and @Jakub were the same person. Today I revisited this thread and solved my new problem with the TensorFlow version; as you mentioned, OpenVINO does not support the latest TensorFlow :D

Regards,

Hoa

GKund1
Beginner

I had the same problem. I had a layer defined in TF via:

model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu, input_shape=x_train.shape[1:]))

print(x_train.shape[1:]) gives (784,)

so I used the Model Optimizer with --input_shape [784], which gave errors. Eventually I found your -b 1.

Please explain why this is the solution. Thanks, Gerd

Jakub wrote:

instead of:

--input_shape [1,300,300,3]

try using:

-b 1
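One way to see why -b 1 works where --input_shape [784] fails is to look at the shapes involved. The sketch below mirrors Gerd's Dense layer; the description of Model Optimizer's behaviour in the comments is how the batch override is documented to work, not code taken from the tool itself:

```python
# Keras's input_shape argument excludes the batch dimension, so a Dense
# layer built with input_shape=(784,) produces a graph input of shape
# (None, 784): the batch axis is undefined (-1 in Model Optimizer's terms).
keras_input_shape = (784,)
frozen_graph_input = (None,) + keras_input_shape  # (None, 784)

# "-b 1" substitutes 1 for just the undefined batch axis, keeping the
# rest of the shape intact, so the resolved input becomes (1, 784).
resolved = (1,) + frozen_graph_input[1:]
print(resolved)  # (1, 784)

# "--input_shape [784]" instead replaces the *entire* input shape with a
# rank-1 tensor of 784 elements, which no longer matches what the Dense
# layer expects, hence the errors Gerd saw. The equivalent explicit form
# would be --input_shape [1,784], with the batch dimension included.
```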
