Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Google YAMNet inference not working with OpenVINO, but works with TensorFlow Lite

j_karthic
Beginner
1,429 Views

I am trying to run Google YAMNet on my Windows 10, x64-based machine using OpenVINO, but I am getting wrong inference results regardless of what input I pass to OpenVINO.

The same model works fine with the TensorFlow Lite backend.

I have attached a zip file containing sample Python code, the model files, and a test input to reproduce this issue.

You can set the backend to either "openvino" or "tflite" as an argument to the "classify_audio" function called from main. TFLite classifies the audio input correctly, while OpenVINO classifies the same input incorrectly.
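
To illustrate, the comparison reduces to something like the following (a minimal sketch rather than the attached script; the "yamnet.tflite" file name and the fixed 15600-sample input frame are assumptions):

import numpy as np
import tensorflow as tf
import openvino as ov

# One 0.975 s YAMNet frame: 15600 float32 samples at 16 kHz (fixed-size model assumed).
waveform = np.zeros(15600, dtype=np.float32)

# TFLite backend.
interpreter = tf.lite.Interpreter(model_path="yamnet.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], waveform)
interpreter.invoke()
tflite_scores = interpreter.get_tensor(out["index"])

# OpenVINO backend, reading the same .tflite file directly.
compiled = ov.compile_model("yamnet.tflite", "CPU")
ov_scores = compiled([waveform])[0]

# Same input, yet the top class differs between the two backends.
print(np.argmax(tflite_scores), np.argmax(ov_scores))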

What is the reason for this problem? Is something wrong with the OpenVINO framework, or am I using it incorrectly?

7 Replies
Zulkifli_Intel
Moderator
1,343 Views

Hi j_karthic,

Thank you for reaching out.

Which OpenVINO version did you use to run this model?

To use the OpenVINO Runtime, you need to convert the TFLite model to IR. The Convert a TensorFlow Lite Model to OpenVINO™ notebook shows how to convert the model with the Model Converter and load it in the OpenVINO Runtime.
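
For example, the conversion can look like this (a minimal sketch; the "yamnet.tflite" file name is an assumption):

import openvino as ov

ov_model = ov.convert_model("yamnet.tflite")   # TFLite -> in-memory OpenVINO model
ov.save_model(ov_model, "yamnet.xml")          # writes the IR pair (yamnet.xml + yamnet.bin)
compiled = ov.compile_model(ov_model, "CPU")   # compile for CPU inference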


Regards,

Zul


j_karthic
Beginner
1,336 Views

Hi Zul,

Thanks for the reply.
I used OpenVINO version 2024.4.0, which came with the pip installation of openvino.
According to this Intel document, https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-tensorflow-lite.html, converting the model is no longer needed for TensorFlow Lite models. Copy-pasting the relevant note from that link:

"TensorFlow Lite model file can be loaded by openvino.Core.read_model or openvino.Core.compile_model methods by OpenVINO runtime API without preparing OpenVINO IR first. Refer to the inference example for more details. Using openvino.convert_model is still recommended if model load latency matters for the inference application."

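So a direct load along these lines should be supported (a minimal sketch, with the model file name assumed):

import openvino as ov

core = ov.Core()
model = core.read_model("yamnet.tflite")     # reads the TFLite file directly, no IR step
compiled = core.compile_model(model, "CPU")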

Nevertheless, I did try the converted model. It gave exactly the same results as the non-converted TFLite model, so the conversion made no difference: both models produce wrong results. I have attached the converted model files for your reference.

Regards,
Karthick

Zulkifli_Intel
Moderator
1,312 Views

Hi j_karthic,

Thank you for sharing. I tested this on my side and observed that the TFLite and OpenVINO predictions indeed differ. Prediction inconsistencies can arise from variations in input normalization, unsupported layers, or differences in layer interpretation. We are investigating this and will get back to you soon.



Regards,

Zul


j_karthic
Beginner
1,175 Views

Hi Zul,

Any updates on this? Any ETA on the resolution?

Regards,

Karthick

Zulkifli_Intel
Moderator
896 Views

Hi j_karthic,

My apologies for not getting back to you sooner.

The following PR (https://github.com/openvinotoolkit/openvino/pull/28868) addresses the issue, and the fix should be included in the upcoming 2025.1 release. You can try building from the master branch once the PR is merged, or use the nightly release.
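
At the time of writing, the nightly wheels can typically be installed with pip install --pre openvino --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly (please check the OpenVINO documentation for the current command). Once installed, you can confirm which build is in use with a quick check like this:

import openvino as ov

# Prints the full build string; a nightly containing the fix should report a 2025.1 pre-release.
print(ov.get_version())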


Regards,

Zul


j_karthic
Einsteiger
862 Views

Hi Zul,

Thanks. Will try this out. 

Regards,
Karthick

Zulkifli_Intel
Moderator
701 Views

This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.

