Hi,
My customer has trained their AI model with the PyTorch framework on Intel Sonoma Creek, and they want to run AI inference on a VPU accelerator card.
They managed to convert their model from:
PyTorch -> ONNX -> FP16 IR Format
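For context, this is the standard flow; here is a minimal sketch of the PyTorch -> ONNX step (the model, input shape, and file names are placeholders, not the customer's actual model):

```python
# Sketch of the PyTorch -> ONNX export step; the model, input shape,
# and file names are placeholders (assumptions), not the customer's.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example NCHW input
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)

# The ONNX -> FP16 IR step is then done with Model Optimizer, e.g.:
#   mo --input_model model.onnx --data_type FP16
# (with OpenVINO 2021.4: python mo.py --input_model model.onnx --data_type FP16)
```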
I have verified their FP16 model on my OpenVINO 2021 installation and it produces the same error messages:
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
[ INFO ] CPU
MKLDNNPlugin version ......... 2021.4
Build ........... 0
Loading network files
[ ERROR ] Unknown model format! Cannot find reader for model format: xml and read the model: model_barehand.xml. Please check that reader library exists in your PATH.
The customer has no issue running the FP32 model in OpenVINO 2022 on CPU.
The customer also tried Intel's FP16 person-detection-retail-0002 model, and it works fine.
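For reference, the failure reproduces with just the model read in the OpenVINO 2021.x Python API (a minimal sketch; assuming the usual .bin weights file sits next to the .xml):

```python
# Minimal repro of the read failure with the OpenVINO 2021.x Python API.
# File names are taken from the log above; the .bin name is assumed to
# match the .xml, as is conventional for IR models.
from openvino.inference_engine import IECore

ie = IECore()
# Raises: "Unknown model format! Cannot find reader for model format: xml ..."
net = ie.read_network(model="model_barehand.xml", weights="model_barehand.bin")
```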
Could someone look into this?
Thanks.
Hi DarkHorse,
For your information, the error message has changed in the latest version of OpenVINO™: earlier versions explicitly reported that the IR version found is not supported, whereas it now fails with "Unknown model format! Cannot find reader for model format".
In short, OpenVINO™ 2022 can load both IRv10 (converted by Model Optimizer 2021.4) and IRv11 (converted by Model Optimizer 2022.1) into the Inference Engine, whereas OpenVINO™ 2021.4 can only load IRv10 and will produce the above error when loading IRv11.
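If in doubt, you can check which IR version a model declares by inspecting the version attribute on the root <net> element of the .xml file. A minimal sketch using only the Python standard library (the file name is taken from your log):

```python
# Sketch: read the IR version attribute from the model's .xml file.
# "model_barehand.xml" is the file name from the original post.
import xml.etree.ElementTree as ET

root = ET.parse("model_barehand.xml").getroot()
print("IR version:", root.get("version"))  # e.g. "10" or "11"
```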
Regards,
Peh
Hi DarkHorse,
Referring to this article, could you please check if both inference_engine_ir_reader.dll and inference_engine.dll are located in the following directory:
<INSTALL_DIR>\deployment_tools\inference_engine\bin\intel64\Release
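If it helps, you can verify this quickly from Python (a minimal sketch; the installation path below is an assumption and should be adjusted to your machine):

```python
# Quick sketch to check that both reader DLLs are present.
# INSTALL_DIR is an assumed default location; adjust to your system.
from pathlib import Path

INSTALL_DIR = Path(r"C:\Program Files (x86)\Intel\openvino_2021")  # assumption
bin_dir = INSTALL_DIR / "deployment_tools" / "inference_engine" / "bin" / "intel64" / "Release"

for dll in ("inference_engine_ir_reader.dll", "inference_engine.dll"):
    print(dll, "->", "found" if (bin_dir / dll).is_file() else "MISSING")
```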
Regards,
Wan
Hi DarkHorse,
Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.
Regards,
Peh
I see the same problem on the Raspberry Pi 4B. Please let me know the solution.
