Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Problem with inference on Openvino with MobileNet v2 using samples

GT0001
Beginner

Hello

I have a consistency problem with inference results when I perform inference on an image with an OpenVINO sample (hello_classification) using MobileNetV2.

 

Here is a small summary of what I did:

 

1) I took the MobileNetV2 weights and performed transfer learning in Keras. I get correct inference with high confidence values.

 

2) Starting from the Keras model, I converted it to TensorFlow, obtaining the .pb and .meta files. Inference in TensorFlow is also correct.

 

3) Using the OpenVINO Model Optimizer script (mo_tf.py), I converted the TensorFlow model to OpenVINO IR format (.xml). The model converted successfully.

 

4) Using the OpenVINO sample (hello_classification), I always get the same prediction and the same class (note that my network has 3 classes).

 

Since the inference results of the Keras model match those of the converted TensorFlow model, I think the problem is in OpenVINO or in the OpenVINO sample.

 

Can someone help me, or suggest another sample where I can try image classification with MobileNet?

 

 

1 Solution
Luis_at_Intel
Moderator

Issue summary: transfer learning was done on MobileNetV2 with Keras. The model was then converted to TensorFlow; inference works and predicts well. The problem occurs when inference is done with OpenVINO using the C++ hello_classification sample, with the TensorFlow model converted to IR format (.xml and .bin). Inference is inaccurate, classifying results into the wrong class.

OpenVINO Version: 2019.2.242

Operating System: Ubuntu 16.04

Model Optimizer command used: python3 mo_tf.py --input_model output.pb --input_shape [1,224,224,3]

 

Solution: When the IR model is generated, you need to bake some pre-processing into it. Since MobileNetV2 is used as the base network in this case, you need to account for input normalization. For MobileNetV2, those numbers are [128,128,128] for the mean values and [128] for the scale value. For reference, see the mean and scale values for other networks in case you are using a different base network with OpenVINO.
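As a sanity check, the normalization that the mean and scale values bake into the IR can be reproduced in plain NumPy. This is a minimal sketch (not OpenVINO code, and the sample pixel values are made up) showing how an 8-bit [0, 255] input maps to the [-1, 1] range MobileNetV2 expects:

```python
import numpy as np

# Hypothetical 1x3 test image (H, W, C) with 8-bit values in [0, 255].
image = np.array([[[0, 128, 255]]], dtype=np.float32)

# The same normalization the Model Optimizer bakes into the IR:
# subtract the per-channel mean, then divide by the scale value.
mean = np.array([128.0, 128.0, 128.0], dtype=np.float32)
scale = 128.0
normalized = (image - mean) / scale

print(normalized)  # values now lie in [-1, 1], as MobileNetV2 expects
```

Without this step the network sees raw [0, 255] pixels instead of the [-1, 1] inputs it was trained on, which explains why it collapses onto a single class.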

Model Optimizer command solution: python3 mo_tf.py --input_model=output.pb --input_shape=[1,224,224,3] --reverse_input_channels --mean_values=[128,128,128] --scale_values=[128]
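Note the --reverse_input_channels flag as well: OpenCV, which the sample uses to load images, reads pixels in BGR order, while the Keras/TensorFlow model was trained on RGB input. A small NumPy sketch (with made-up pixel values) of the channel swap that flag compensates for:

```python
import numpy as np

# Hypothetical 1x1 RGB pixel: R=10, G=20, B=30.
rgb = np.array([[[10, 20, 30]]], dtype=np.uint8)

# Reversing the last (channel) axis converts RGB <-> BGR,
# which is what --reverse_input_channels accounts for in the IR.
bgr = rgb[..., ::-1]

print(bgr[0, 0].tolist())  # [30, 20, 10]
```

If the channel order is wrong, colors are effectively swapped at the input, which on its own can noticeably degrade accuracy.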

 

 

Regards,

@Luis_at_Intel​ 


5 Replies
Luis_at_Intel
Moderator

Hi GT0001,

 

Thanks for reaching out. Let me ask you a couple of questions so I can provide a suggestion: which OpenVINO version are you using? What command did you use to convert the model from TensorFlow to IR format? Which operating system are you using? If possible, it would be great if you could provide your model so I can try to convert and test it; if you don't want to share it publicly, you can send me a private message with your files.

 

I will be waiting for your response.

 

Regards,

@Luis_at_Intel​ 

GT0001
Beginner

Hi @Luis_at_Intel​ 

I sent you a private message with the info and files you requested.

 

Thank you so much

Luis_at_Intel
Moderator

Thank you! We can continue our conversation via PM, and if we find a viable solution to the problem we can share it here so others can benefit from it.

 

 

Regards,

@Luis_at_Intel​ 

GT0001
Beginner

Hi @Luis_at_Intel​ 

Yes, for sure!

I hope you can help me find a solution for my AI project, and that the solution can be shared with the whole Intel AI community.

 

Thanks a lot
