Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Google YAMNet inference not working with OpenVINO, but works with TensorFlow Lite

j_karthic
Beginner

I am trying to run Google YAMNet on my Windows 10, x64-based machine using OpenVINO, but I am getting wrong inference results regardless of which input I pass to OpenVINO.

The same model works fine with the TensorFlow Lite backend.

I have attached a zip file containing sample Python code, model files, and a test input to reproduce this issue.

You can set the backend to either "openvino" or "tflite" as an argument to the "classify_audio" function called from main. TFLite classifies the audio input correctly, while OpenVINO classifies the same input incorrectly.
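A minimal sketch of what such a two-backend classify_audio could look like (illustrative only, not the attached code; the model file name "yamnet.tflite", a fixed-length 16 kHz float32 waveform input, and a single class-scores output are assumptions):

import numpy as np

def classify_audio(waveform, backend="tflite"):
    # waveform: fixed-length float32 mono audio at 16 kHz (assumed)
    if backend == "tflite":
        import tensorflow as tf
        interpreter = tf.lite.Interpreter(model_path="yamnet.tflite")
        interpreter.allocate_tensors()
        inp = interpreter.get_input_details()[0]
        interpreter.set_tensor(inp["index"], waveform.astype(np.float32))
        interpreter.invoke()
        # first output assumed to hold the class scores
        scores = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
    elif backend == "openvino":
        import openvino as ov
        compiled = ov.Core().compile_model("yamnet.tflite", "CPU")
        scores = compiled([waveform.astype(np.float32)])[compiled.output(0)]
    else:
        raise ValueError("unknown backend: " + backend)
    return int(np.argmax(scores))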

What is the reason for this problem? Is something wrong with the OpenVINO framework, or am I using it incorrectly?

Zulkifli_Intel
Moderator

Hi j_karthic,

Thank you for reaching out.

Which OpenVINO version did you use to run this model?

To use the OpenVINO Runtime, you need to convert the TFLite model to IR.

The Convert a TensorFlow Lite Model to OpenVINO™ notebook shows how to convert the model using the Model Converter and load it in the OpenVINO Runtime.
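The conversion flow is roughly as follows (a minimal sketch; the file name "yamnet.tflite" is an assumption):

import openvino as ov

# convert the TFLite model to an in-memory OpenVINO model
ov_model = ov.convert_model("yamnet.tflite")

# serialize to IR (writes yamnet.xml and yamnet.bin)
ov.save_model(ov_model, "yamnet.xml")

# load and compile the IR for inference
compiled = ov.Core().compile_model("yamnet.xml", "CPU")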

Regards,

Zul

j_karthic
Beginner

Hi Zul,

Thanks for the reply.
I used OpenVINO version 2024.4.0, which came with the pip installation of openvino.
As per this Intel document, https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-tensorflow-lite.html, converting the model is no longer needed for TensorFlow Lite models. Copy-pasting the relevant note from the above link:

"TensorFlow Lite model file can be loaded by openvino.Core.read_model or openvino.Core.compile_model methods by OpenVINO runtime API without preparing OpenVINO IR first. Refer to the inference example for more details. Using openvino.convert_model is still recommended if model load latency matters for the inference application."

Nevertheless, I did try using the converted model. It gave exactly the same results as the non-converted TFLite model; it didn't really matter, as both produce wrong results. I have attached the converted model files for your reference.

Regards,
Karthick

Zulkifli_Intel
Moderator

Hi j_karthic,

Thank you for sharing. I tested on my side and observed that the predictions from TFLite and OpenVINO differ. Prediction inconsistencies can arise from variations in input normalization, unsupported layers, or differences in layer interpretation. We are investigating this and will get back to you soon.
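One way to narrow down where the divergence happens is to feed the identical preprocessed tensor to both runtimes and diff the raw outputs (a generic debugging sketch, not an official procedure; the input length, the file name, and the class scores being the first output are assumptions):

import numpy as np
import tensorflow as tf
import openvino as ov

# identical preprocessed input for both runtimes
# (15600 samples = 0.975 s at 16 kHz, YAMNet's usual frame; adjust as needed)
waveform = np.random.rand(15600).astype(np.float32)

# TFLite reference run
interp = tf.lite.Interpreter(model_path="yamnet.tflite")
interp.allocate_tensors()
interp.set_tensor(interp.get_input_details()[0]["index"], waveform)
interp.invoke()
tfl_scores = interp.get_tensor(interp.get_output_details()[0]["index"])

# OpenVINO run on the same tensor
compiled = ov.Core().compile_model("yamnet.tflite", "CPU")
ov_scores = compiled([waveform])[compiled.output(0)]

# a large difference here implicates the runtime rather than preprocessing
print("max abs diff:", np.max(np.abs(tfl_scores - ov_scores)))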

Regards,

Zul


j_karthic
Beginner

Hi Zul,

Any updates on this? Any ETA on the resolution?

Regards,

Karthick
