Hi, I am new to OpenVINO.
I bought an NCS2 and am using OpenVINO R5 (l_openvino_toolkit_p_2018.5.445).
I tried to run image classification using inception v3.
I got Inception v3 running with the OpenVINO Inference Engine after converting the model.
However, the results differ between OpenVINO and native TensorFlow, and I am not sure which step I got wrong.
These are my steps.
I followed the steps (https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow#example_of_an_inception_v1_model_conversion), but for Inception v3.
1. Download inception_v3_inference_graph.pb and inception_v3.ckpt
2. Run mo_tf.py with the command below
python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
--input_model ./inception_v3_inference_graph.pb \
--input_checkpoint ./inception_v3.ckpt \
-b 1 \
--data_type FP32 \
--mean_value [127.5,127.5,127.5] \
--scale 127.5
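As I understand it, these two flags bake per-channel normalization (x - mean) / scale into the generated IR, so the sample can feed raw 0..255 pixels. A quick sketch of the arithmetic (my own illustration, not toolkit code):

```python
# Sketch of the normalization that --mean_value [127.5,127.5,127.5] and
# --scale 127.5 bake into the IR: out = (pixel - mean) / scale, per channel.
def mo_normalize(pixel, mean=127.5, scale=127.5):
    """Map a 0..255 pixel value into the [-1, 1] range Inception v3 expects."""
    return (pixel - mean) / scale

print(mo_normalize(0))      # -1.0
print(mo_normalize(127.5))  #  0.0
print(mo_normalize(255))    #  1.0
```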
3. Run the classification_sample binary built from deployment_tools/inference_engine/samples/classification_sample/main.cpp
./classification_sample -i ./car.png -m ./inception_v3_inference_graph.xml -d CPU
Based on the outputs from this execution, the binary read the right files (i.e., xml, bin, and labels) and resized the image correctly.
[ WARNING ] Image is resized from (787, 259) to (299, 299)
This is the output.
437 0.1907818 label beach wagon
512 0.1797879 label convertible
818 0.1449670 label sports car
480 0.1041356 label car wheel
582 0.0903165 label grille
628 0.0790678 label limousine
752 0.0158710 label racer
469 0.0099501 label cab
706 0.0059072 label passenger car
865 0.0047262 label tow truck
I compared this with native TensorFlow.
I froze the graph from the same inputs (i.e., inception_v3_inference_graph.pb and inception_v3.ckpt) used for model optimization in OpenVINO, referring to "Freezing the exported Graph" (https://github.com/tensorflow/models/tree/master/research/slim).
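The freeze step (per the slim README's "Freezing the exported Graph" section) looks roughly like this; exact paths may differ from what I actually typed:

```shell
# Sketch of the freeze_graph invocation from the TF-Slim README,
# with the file names from this post substituted in.
python3 freeze_graph.py \
  --input_graph=./inception_v3_inference_graph.pb \
  --input_checkpoint=./inception_v3.ckpt \
  --input_binary=true \
  --output_graph=./frozen_inception_v3.pb \
  --output_node_names=InceptionV3/Predictions/Reshape_1
```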
$ ./bazel-bin/tensorflow/examples/label_image/label_image --image=./car.png --graph=./frozen_inception_v3.pb
This is the result.
2019-01-31 20:36:48.179882: I tensorflow/examples/label_image/main.cc:259] convertible (512): 0.502721
2019-01-31 20:36:48.179919: I tensorflow/examples/label_image/main.cc:259] sports car (818): 0.213444
2019-01-31 20:36:48.179933: I tensorflow/examples/label_image/main.cc:259] car wheel (480): 0.0783127
2019-01-31 20:36:48.179946: I tensorflow/examples/label_image/main.cc:259] beach wagon (437): 0.0409413
2019-01-31 20:36:48.179961: I tensorflow/examples/label_image/main.cc:259] limousine (628): 0.0207342
Q1. Am I missing some steps?
Q2. Based on the README.md in /inference_engine/samples/classification_sample, I can use this binary to run Inception v3, even though it is used for the demo samples (/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/demo).
Is that right? Or should I write new C++ code to build an image classification application with Inception v3?
Q3. I got this from summarize_graph.py
python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo/utils/summarize_graph.py --input_model ./inception_v3_inference_graph.pb
1 input(s) detected:
Name: input, type: float32, shape: (-1,299,299,3)
2 output(s) detected:
There are two outputs, but I only used "InceptionV3/Predictions/Reshape_1" when I ran mo_tf.py.
Is that OK? The native TensorFlow code also uses the same output layer.
float input_mean = 0;
float input_std = 255;
string input_layer = "input";
string output_layer = "InceptionV3/Predictions/Reshape_1";
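If I understand these defaults correctly, label_image applies (x - input_mean) / input_std, which with input_mean=0 and input_std=255 gives a [0, 1] range, while the IR I generated normalizes to [-1, 1]. A quick sketch of the two pipelines (my assumption about the preprocessing, not verified against the sources):

```python
# Hypothetical side-by-side of the two preprocessing pipelines implied above.
def tf_label_image(pixel, input_mean=0.0, input_std=255.0):
    # label_image applies (x - input_mean) / input_std -> range [0, 1]
    return (pixel - input_mean) / input_std

def mo_ir(pixel, mean=127.5, scale=127.5):
    # Model Optimizer bakes (x - mean_value) / scale into the IR -> range [-1, 1]
    return (pixel - mean) / scale

print(tf_label_image(255))  # 1.0
print(mo_ir(255))           # 1.0
print(tf_label_image(0))    # 0.0
print(mo_ir(0))             # -1.0  <- the two runs feed different input ranges
```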
I attached the generated bin and xml files which I used to run inference engine.
Any comments will be appreciated.
This looks similar but not identical to https://software.intel.com/comment/1933099
I would study it to get some ideas of where these kinds of errors and discrepancies can come in.
In https://software.intel.com/comment/1933099 we found at least 3 issues, and after fixing them the results were very similar. In general, the way you pre-process the input can be very critical. Check again whether the mean/scale values are correct, whether you need to reverse the channels, and test with input test vectors. You would have to debug this step by step and study intermediate results.
Thank you for the reply.
> I would study to get some ideas where this kind of errors and discrepancies can come in.
Thanks. I used all the example files from TensorFlow and OpenVINO without modifications, so it should be easy to reproduce the problem. Please try the attached files (xml and bin) and see whether you get the same accuracy numbers I got. I also attached the input image (i.e., car.png).
> Check again if mean/scale vales are correct if you need to reverse channels and test with input test vectors.
In the model optimization procedure, I entered the mean/scale values below, as mentioned in "Supported Unfrozen Topologies from the TensorFlow*-Slim Image Classification Model Library":
--mean_value [127.5,127.5,127.5] \
--scale 127.5 \
If you look at the attached xml file, it contains the mean and scale values exactly as I entered them in the model optimization procedure.
> if you need to reverse channels and test with input test vectors.
Could you elaborate on the debugging procedure in more detail? I would be happy to dive into the problem, but I am new to TF and OpenVINO.
Also, was the classification_sample binary from the OpenVINO demo designed to run Inception v3 with the generated xml and bin? (This was my second question in the post.) Based on your reply, I may need to modify classification_sample if pre-processing is required.
Or are there Python examples or code to run the Inception v3 model for inference?
> Could you elaborate on the debugging procedure in more detail? I would be happy to dive into the problem, but I am new to TF and OpenVINO.
I was referring to --reverse_input_channels: switch the input channels order from RGB to BGR.
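To illustrate why this matters: OpenCV (used by the samples) loads images as BGR, while the TF-Slim model was trained on RGB input, so one side of your comparison may be feeding swapped channels. Conceptually the flag does this per pixel (my own illustration, not toolkit code):

```python
# Sketch of what --reverse_input_channels does conceptually:
# swap each (R, G, B) pixel to (B, G, R), or vice versa.
def reverse_channels(pixel):
    """Swap an (R, G, B) triple to (B, G, R)."""
    r, g, b = pixel
    return (b, g, r)

print(reverse_channels((10, 20, 30)))  # (30, 20, 10)
```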
Please study https://software.intel.com/comment/1933099 and try to test with input vectors like in https://github.com/ngeorgis/pytorch_onnx_openvino
It should be relatively easy to get good results that are very similar between frameworks. It is also a good learning exercise.
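Regarding your question about Python: the toolkit ships Python samples alongside the C++ ones (look under deployment_tools/inference_engine/samples in your install for classification_sample.py; I am not certain of the exact path in 2018 R5). A minimal sketch of loading your generated IR with the 2018 R5 Python API would look roughly like this (file names taken from your post; treat this as an unverified sketch, not a tested script):

```python
# Minimal sketch of IR inference with the OpenVINO 2018 R5 Python API.
import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model="inception_v3_inference_graph.xml",
                weights="inception_v3_inference_graph.bin")
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
n, c, h, w = net.inputs[input_blob].shape

plugin = IEPlugin(device="CPU")  # "MYRIAD" for NCS2 (which needs an FP16 IR)
exec_net = plugin.load(network=net)

image = cv2.imread("car.png")        # note: OpenCV loads images as BGR
image = cv2.resize(image, (w, h))
image = image.transpose((2, 0, 1))   # HWC -> CHW
res = exec_net.infer(inputs={input_blob: image.reshape((n, c, h, w))})

probs = res[out_blob].flatten()
for idx in probs.argsort()[::-1][:10]:
    print(idx, probs[idx])
```

Note that if your IR was converted without --reverse_input_channels, the BGR image from cv2.imread may be what the model actually expects; check which order your conversion assumed.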