Beginner

Model Optimizer incorrect output shape using --input

Hi,

I'm trying to optimize the DeepLab model for an NCS2 using the following command:

$ python /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_inference_graph.pb --data_type FP16 --input "ImageTensor[1 224 224 3]" --output SemanticPredictions --output_dir ./irGraph/

The problem I'm running into is that the first layers of the generated .xml file look like this:

<?xml version="1.0" ?>
<net batch="1" name="frozen_inference_graph" version="6">
  <layers>
    <layer id="0" name="sub_2/negate_/Output_0/Data__const" precision="FP16" type="Const">
      <output>
        <port id="1">
          <dim>1</dim>
          <dim>1</dim>
          <dim>3</dim>
        </port>
      </output>
      <blobs>
        <custom offset="0" size="12"/>
      </blobs>
    </layer>
    <layer id="1" name="ImageTensor" precision="FP16" type="Input">
      <output>
        <port id="0">
          <dim>1</dim>
          <dim>3</dim>
          <dim>224</dim>
          <dim>224</dim>
        </port>
      </output>
    </layer>

The problem is the shape of that first Const layer: it does not have the correct shape, and as a result the model cannot be processed by the Inference Engine.

 

What I'm trying to do is use the Model Optimizer to cut that first layer off by specifying the input tensor with --input. Any help would be appreciated.

 

 

Here is a link to the model:

http://download.tensorflow.org/models/deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz

 

Thanks,

 

Tim

 

4 Replies

Hi TCame,

 

Thank you for contacting Intel Customer Support; I am glad to assist you.

I am looking into your request and will get back to you as soon as possible.

 

Regards,

 

David C.

Intel Customer Support Technician

A Contingent Worker at Intel

 


Hi TCame,

 

Thank you for your patience.

 

In order to convert the DeepLab model to IR, you need to run the following command:

python3 ~/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model frozen_inference_graph.pb --input 1:mul_1 --output ArgMax --reverse_input_channels --input_shape [1,513,513,3] --data_type FP16

 

After converting your files, could you please test them with the segmentation demo and let me know if you still get errors?
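For reference, a `--reverse_input_channels` note: that flag bakes the RGB-to-BGR channel swap into the IR itself, so the demo does not have to reverse channels manually. The sketch below (my own illustration, not code from the segmentation demo; the helper name `prepare_input` is hypothetical) shows the preprocessing an NCHW IR with this input shape would otherwise expect, using NumPy only:

```python
import numpy as np

def prepare_input(frame_rgb: np.ndarray) -> np.ndarray:
    """Turn an HxWx3 RGB frame into a (1, 3, 513, 513) NCHW batch.

    The RGB->BGR reversal shown here is exactly what
    --reverse_input_channels embeds into the converted IR, so with the
    command above this step is not needed at runtime; the layout
    transpose and batch dimension are still the caller's job.
    """
    assert frame_rgb.shape == (513, 513, 3), "resize to 513x513 first"
    bgr = frame_rgb[..., ::-1]           # RGB -> BGR channel reversal
    nchw = bgr.transpose(2, 0, 1)        # HWC -> CHW (IR inputs are NCHW)
    return nchw[np.newaxis, ...]         # add batch dim -> (1, 3, 513, 513)

# Usage: a pure-red RGB frame ends up in the last (R) plane after the swap.
frame = np.zeros((513, 513, 3), dtype=np.uint8)
frame[..., 0] = 255
batch = prepare_input(frame)
print(batch.shape)  # (1, 3, 513, 513)
```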

 

Regards,

 

David C.

Intel Customer Support Technician

A Contingent Worker at Intel


Hi David,

 

Yes, that worked perfectly. Thanks for your help.

 

Tim


Hi TCame,

 

Thank you for contacting us; I am happy to have helped fix your issue.

 

I am going to close this thread. If you need further assistance, do not hesitate to contact us again.

 

Have a nice day!

 

Best Regards,

 

David C.

Intel Customer Support Technician

A Contingent Worker at Intel

 
