I am trying to run Google's YAMNet model on my Windows 10, x64-based machine using OpenVINO, but I am getting wrong inference results regardless of what input I pass to OpenVINO.
The same model works fine with the TensorFlow Lite backend.
I have attached a zip file containing sample Python code, the model files, and a test input to reproduce this issue.
You can set the backend to either "openvino" or "tflite" as an argument to the "classify_audio" function called from main. TFLite classifies the audio input correctly, while OpenVINO classifies the same input wrongly.
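Roughly, the attached script does the following (a simplified sketch; the actual preprocessing, file names, and label lookup are in the attached zip):

```python
import numpy as np

def classify_audio(waveform: np.ndarray, backend: str = "tflite") -> int:
    """waveform: 1-D float32 mono audio at 16 kHz, as YAMNet expects."""
    if backend == "tflite":
        import tensorflow as tf
        interpreter = tf.lite.Interpreter(model_path="yamnet.tflite")  # illustrative path
        inp = interpreter.get_input_details()[0]
        interpreter.resize_tensor_input(inp["index"], waveform.shape)  # YAMNet input length is dynamic
        interpreter.allocate_tensors()
        interpreter.set_tensor(inp["index"], waveform)
        interpreter.invoke()
        # first output assumed to be the (frames, 521) class-score matrix
        scores = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
    else:  # "openvino"
        import openvino as ov
        core = ov.Core()
        compiled = core.compile_model("yamnet.tflite", "CPU")  # .tflite read directly, no IR
        scores = compiled(waveform)[compiled.output(0)]
    return int(scores.mean(axis=0).argmax())  # top class, averaged over frames
```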
What is the reason for this problem? Is something wrong with the OpenVINO framework, or am I using it incorrectly?
Hi j_karthic,
Thank you for reaching out.
Which OpenVINO version did you use to run this model?
To use the OpenVINO Runtime, you need to convert the TFLite model to IR.
The Convert a TensorFlow Lite Model to OpenVINO™ notebook shows how to convert the model using Model Converter and load it in the OpenVINO Runtime.
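For example, the conversion and loading look roughly like this (a minimal sketch, using an illustrative file name):

```python
import openvino as ov

# Convert the TFLite model to OpenVINO IR (file name is illustrative)
# Equivalently, from the command line: ovc yamnet.tflite
ov_model = ov.convert_model("yamnet.tflite")
ov.save_model(ov_model, "yamnet.xml")  # writes yamnet.xml and yamnet.bin

# Load the IR and compile it for a target device in the OpenVINO Runtime
core = ov.Core()
compiled_model = core.compile_model("yamnet.xml", "CPU")
```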
Regards,
Zul
Hi Zul,
Thanks for the reply.
I used OpenVINO version 2024.4.0, which is what the pip installation of openvino provides.
As per this Intel document, https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-tensorflow-lite.html,
converting the model is no longer needed for TensorFlow Lite models. I am copy-pasting the relevant note from the above link:
"TensorFlow Lite model file can be loaded by openvino.Core.read_model or openvino.Core.compile_model methods by OpenVINO runtime API without preparing OpenVINO IR first. Refer to the inference example for more details. Using openvino.convert_model is still recommended if model load latency matters for the inference application."
Nevertheless, I did try the converted model as well. It gave exactly the same results as the non-converted TFLite model, so the conversion made no difference: both models give wrong results. I have attached the converted model files for your reference.
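For reference, in both cases the loading boils down to something like this (a minimal sketch; the file names are placeholders for the attached files):

```python
import openvino as ov

core = ov.Core()

# Path 1: read the .tflite file directly, without converting to IR first
compiled_tflite = core.compile_model(core.read_model("yamnet.tflite"), "CPU")

# Path 2: read the pre-converted IR attached to this post
compiled_ir = core.compile_model(core.read_model("yamnet.xml"), "CPU")

# Both compiled models give identical (and equally wrong) scores for the test input.
```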
Regards,
Karthick
Hi j_karthic,
Thank you for sharing. I tested on my side and confirmed that the TFLite and OpenVINO predictions differ. Prediction inconsistencies can arise from variations in input normalization, unsupported layers, or differences in how layers are interpreted. We are investigating this and will get back to you soon.
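One illustrative way to narrow this down is to feed the exact same preprocessed waveform to both backends and compare the raw score matrices directly, along these lines (a minimal sketch, assuming both backends return a (frames, 521) score matrix):

```python
import numpy as np

def compare_backend_scores(scores_tflite: np.ndarray,
                           scores_openvino: np.ndarray,
                           atol: float = 1e-3) -> bool:
    """Compare raw YAMNet score matrices produced from the identical input waveform."""
    diff = np.abs(scores_tflite - scores_openvino)
    print("max abs difference :", float(diff.max()))
    print("tflite top class   :", int(scores_tflite.mean(axis=0).argmax()))
    print("openvino top class :", int(scores_openvino.mean(axis=0).argmax()))
    return bool(diff.max() <= atol)
```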
Regards,
Zul
Hi Zul,
Any updates on this? Any ETA on the resolution?
Regards,
Karthick