I am trying to run Google's YAMNet model on my Windows 10 x64 machine using OpenVINO, but I am getting wrong inference results regardless of the input I pass to it.
The same model works fine with the TensorFlow Lite backend.
I have attached a zip file containing sample Python code, the model files, and a test input to reproduce this issue.
You can set the backend to either "openvino" or "tflite" as an argument to the "classify_audio" function called from main. TFLite classifies the audio input correctly, while OpenVINO classifies the same input wrongly.
What is causing this problem? Is something wrong with the OpenVINO framework, or am I using it incorrectly?
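The comparison boils down to the following (a minimal sketch, assuming YAMNet-style scores of shape (frames, 521); the actual classify_audio implementation is in the attached zip, and the synthetic scores here only stand in for real backend outputs):

```python
# Minimal sketch of how the two backends' predictions are compared,
# assuming YAMNet-style class scores of shape (frames, 521).
import numpy as np

def top_class(scores: np.ndarray) -> int:
    """Average class scores over time frames, then pick the best class."""
    return int(np.argmax(scores.mean(axis=0)))

# Synthetic scores stand in for the real backend outputs here.
rng = np.random.default_rng(0)
scores_tflite = rng.random((4, 521)).astype(np.float32)
scores_openvino = scores_tflite.copy()

# Identical scores must yield the same label; on my machine the real
# OpenVINO scores diverge from the TFLite ones.
print(top_class(scores_tflite) == top_class(scores_openvino))
```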
Hi j_karthic,
Thank you for reaching out.
Which OpenVINO version did you use to run this model?
To use the OpenVINO Runtime, you need to convert the TFLite model to IR.
The Convert a TensorFlow Lite Model to OpenVINO™ notebook shows how to convert the model using Model Converter and load it in OpenVINO Runtime.
Regards,
Zul
Hi Zul,
Thanks for the reply.
I used OpenVINO version 2024.4.0, which is what the pip installation of openvino gave me.
According to this Intel document, https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-tensorflow-lite.html,
converting the model is no longer needed for TensorFlow Lite models. Quoting the relevant note from that link:
"TensorFlow Lite model file can be loaded by openvino.Core.read_model or openvino.Core.compile_model methods by OpenVINO runtime API without preparing OpenVINO IR first. Refer to the inference example for more details. Using openvino.convert_model is still recommended if model load latency matters for the inference application."
Nevertheless, I did try the converted model as well. It gave exactly the same results as the unconverted TFLite model, so the conversion made no difference: both models produce wrong results. I have attached the converted model files for your reference.
Regards,
Karthick
Hi j_karthic,
Thank you for sharing. I tested this on my side and observed that the predictions from TFLite and OpenVINO do differ. Prediction inconsistencies can arise from differences in input normalization, unsupported layers, or layer interpretation. We are investigating this and will get back to you soon.
Regards,
Zul
Hi Zul,
Any updates on this? Is there an ETA for the resolution?
Regards,
Karthick
Hi j_karthic,
My apologies for not getting back to you sooner.
The following PR (https://github.com/openvinotoolkit/openvino/pull/28868) has addressed the issue; the fix should be included in the upcoming 2025.1 release. You can build from the master branch once the PR is merged, or use the nightly release.
Regards,
Zul
This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
