Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Different inference output when running the model in Python and C++

Gupta__Shubham
New Contributor I

Hi,

I created a classification model (a 12-output model) in TensorFlow and generated an IR model using:

python3 mo_tf.py --input_model ./frozen_model.pb --output_dir ./ --input_shape [1,64,128,1] --scale 255.

Now, when I run inference in Python it gives the desired output, but when I run the same model in C++ it gives wrong results.

Here is my Python code:

###########################################
import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

plugin_dir = None
model_xml = 'frozen_model.xml'
model_bin = 'frozen_model.bin'
plugin = IEPlugin("CPU", plugin_dirs=plugin_dir)
# Read IR
net = IENetwork.from_ir(model=model_xml, weights=model_bin)
assert len(net.inputs.keys()) == 1
assert len(net.outputs) == 12
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
# Load network to the plugin
exec_net = plugin.load(network=net)

# Run inference on a single grayscale image
fileName = 'test.jpg'
processedImg = cv2.imread(fileName, cv2.IMREAD_GRAYSCALE)
processedImg = cv2.resize(processedImg, (128, 64))  # cv2.resize takes (width, height)
# Model Optimizer transposes the TF input [1,64,128,1] to NCHW [1,1,64,128]
processedImg = processedImg.reshape(1, 1, 64, 128)
res = exec_net.infer(inputs={input_blob: processedImg})
for key, value in res.items():
    idx = np.argsort(value[0])[-1]  # index of the top score for this output
    print(key, idx)

#############################################

For the C++ code I am taking reference from the Hello Classification sample.
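The image-to-blob part of my C++ code follows the sample and looks roughly like this (a sketch, not the exact sample source; fillBlobFromImage is my own name for it):

#include <cstdint>
#include <opencv2/opencv.hpp>
#include <inference_engine.hpp>

// Sketch of the sample-style copy from an OpenCV image into an input blob.
// cv::imread() with default flags returns a 3-channel BGR image, so this
// copy assumes the blob's channel count matches the image's.
void fillBlobFromImage(const cv::Mat &img, InferenceEngine::Blob::Ptr &blob) {
    uint8_t *blobData = blob->buffer().as<uint8_t *>();
    const size_t channels = img.channels();
    const size_t imageSize = static_cast<size_t>(img.rows) * img.cols;
    for (size_t ch = 0; ch < channels; ++ch)        // interleaved BGR -> planar
        for (size_t p = 0; p < imageSize; ++p)
            blobData[ch * imageSize + p] = img.data[p * channels + ch];
}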

Please help

Thank you

7 Replies
Shubha_R_Intel
Employee

Dear Shubham:

Rather than writing your own Python code, please run the following:

1) inference_engine\samples\classification_sample (C++)

and 

2) inference_engine\samples\python_samples\classification_sample.py

They should produce the same results on your model.

Thanks,

Shubha

Gupta__Shubham
New Contributor I

Hi Shubha,

I ran the inference from the sample codes, and I am still getting the correct output only from the Python one.

1. C++:

cmd:

./classification_sample -m ./frozen_model.xml -i ./inputpython.jpg

output:

[ INFO ] InferenceEngine:
    API version ............ 1.4
    Build .................. 19154
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     ./inputpython.jpg
[ INFO ] Loading plugin

    API version ............ 1.5
    Build .................. lnx_20181004
    Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
    ./frozen_model.xml
    ./frozen_model.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Processing output blobs

Top 10 results:

Image ./inputpython.jpg

32 0.0913208 label #32
27 0.0880081 label #27
28 0.0757414 label #28
17 0.0752884 label #17
10 0.0681749 label #10
22 0.0543781 label #22
20 0.0500973 label #20
19 0.0431233 label #19
23 0.0426727 label #23
30 0.0385205 label #30


total inference time: 4.3145432
Average running time of one iteration: 4.3145432 ms

Throughput: 231.7742460 FPS

[ INFO ] Execution successful

 

2. Python:

cmd :

python3 classification_sample.py -m ./frozen_model.xml -i ./inputpython.jpg

output :

[ INFO ] Loading network files:
    /media/sr/Only Data here/Projects/ANPR/C++/config/frozen_model.xml
    /media/sr/Only Data here/Projects/ANPR/C++/config/frozen_model.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image /home/sr/inputpython.jpg is resized from (64,) to (64, 128)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Average running time of one iteration: 5.311012268066406 ms
[ INFO ] Processing output blob
[ INFO ] Top 10 results:
Image /home/sr/inputpython.jpg

0.9239681 label #16
0.0496760 label #12
0.0167194 label #10
0.0035272 label #26
0.0026951 label #14
0.0017902 label #20
0.0012748 label #21
0.0002022 label #28
0.0001061 label #24
0.0000123 label #11

Thanks

Shubham

Shubha_R_Intel
Employee

Dear Shubham,

That is definitely strange! I have sent you a PM so that you can send me your frozen model as a zip file.

Let me check it out.

Thanks for using OpenVINO!

Shubha

Gupta__Shubham
New Contributor I

Hi Shubha,

No worries, I have fixed the issue. It is working now. I am guessing the problem I was facing with Intel's classification sample (C++) was because of the way it reads the image, since I am using grayscale images for my project.
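For anyone else hitting this, the change on my side was essentially to read and resize the image as a single channel before filling the blob. A sketch of what I mean, with the sizes taken from my --input_shape [1,64,128,1]:

#include <algorithm>
#include <cstdint>
#include <string>
#include <opencv2/opencv.hpp>
#include <inference_engine.hpp>

// Read the image as a single channel to match the model's [1,64,128,1] input.
// IMREAD_GRAYSCALE is the key change: default imread() gives three BGR
// channels, which does not match a single-channel network input.
void fillGrayscaleBlob(const std::string &imagePath,
                       InferenceEngine::Blob::Ptr &inputBlob) {
    cv::Mat img = cv::imread(imagePath, cv::IMREAD_GRAYSCALE);
    cv::resize(img, img, cv::Size(128, 64));   // cv::Size is (width, height)

    // With one channel, the planar copy degenerates to a straight copy.
    uint8_t *blobData = inputBlob->buffer().as<uint8_t *>();
    std::copy(img.data, img.data + img.total(), blobData);
}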

 

Regards

Shubham

Shubha_R_Intel
Employee

Dear Shubham, congrats!

Glad you fixed it!

Thanks for using OpenVINO,

Shubha

Ravichandran__Sangat

Hi Shubham,

Can you tell me how you fixed the issue? I am using grayscale images as well, and I have the exact same problem.

Best,

Sanga

Shubha_R_Intel
Employee

Dear Sanga,

Reading the code (main.cpp) for classification_sample_async, I am not seeing anything that would affect its ability to read grayscale images or cause it to classify them improperly. Nothing whatsoever is hard-coded that would rule out grayscale images. In fact, the following line of code "converts" the input to the grayscale (0-255) range:

inputInfoItem.second->setPrecision(Precision::U8);
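
For context, that line sits in the sample's input-configuration loop, which looks roughly like this (a sketch of the sample's approach, not a verbatim copy):

#include <inference_engine.hpp>

using namespace InferenceEngine;

// Sketch of the input-configuration loop in the classification samples.
// setPrecision(Precision::U8) sets the element type to 8-bit unsigned
// integers (the 0-255 pixel range); the channel count itself still comes
// from the network, so nothing here breaks for a grayscale model.
void configureInputs(CNNNetwork &network) {
    for (auto &inputInfoItem : network.getInputsInfo()) {
        inputInfoItem.second->setPrecision(Precision::U8);
        inputInfoItem.second->setLayout(Layout::NCHW);
    }
}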

Are you on the latest and greatest OpenVINO 2019 R2.01?

Thanks,

Shubha

 

 
