Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Error when converting a .onnx file to a .blob file

kokonuts
Novice

Hi all,

 

I just successfully converted my saved model (.pb) trained in a TF framework to both ONNX (.onnx) and IR (.xml and .bin). I tried converting the ONNX one to a .blob file directly so I can deploy this custom object detection model on my OAK camera.

 

I have tried many times but it keeps failing, and as a newbie I'm not sure what is actually going on.

 

This is just a part of the error message, as there are too many lines. I'm willing to provide more info, as well as my onnx file if necessary.

Screen Shot 2023-08-09 at 2.41.19 AM.png

Screen Shot 2023-08-09 at 2.41.34 AM.png

 

Screen Shot 2023-08-09 at 2.44.57 AM.png

Regards,

Austin

kokonuts
Novice

I was trying to convert my IR (i.e., the .xml file) to a .blob file directly, but it failed. I guess converting my saved model to an ONNX file and then to a blob file might be a better workflow?

Aznie_Intel
Moderator

 

Hi Kokonuts,

 

Thanks for reaching out.

 

First of all, ONNX models are supported via the FrontEnd API, which lets the OpenVINO Runtime API read them directly. You may use your ONNX model directly to run inference with OpenVINO.
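
For illustration, here is a minimal sketch of reading an ONNX model directly with the InferenceEngine (2022.x) API; the model file name and the CPU device are placeholders:

InferenceEngine::Core ie;

// Read the ONNX model directly; no prior conversion to IR is required
InferenceEngine::CNNNetwork network = ie.ReadNetwork("model_name.onnx");

// Compile the network for a target device, e.g. CPU
InferenceEngine::ExecutableNetwork executableNetwork = ie.LoadNetwork(network, "CPU");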

 

Meanwhile, for blob creation, you may refer to this blob creation and memory utility documentation. If you want to create a blob file from your existing model, you can use the OpenVINO Compile Tool with this specific command:

./compile_tool -m <path_to_model>/model_name.xml -d MYRIAD -o model_name.blob

 

To import a blob with the network from a generated file into your application, use the InferenceEngine::Core::ImportNetwork method:

InferenceEngine::Core ie;

// Open the compiled blob file in binary mode
std::ifstream file("model_name.blob", std::ios::binary);

InferenceEngine::ExecutableNetwork executableNetwork = ie.ImportNetwork(file, "MYRIAD", {});

 

You may share your models for us to further investigate this on our end.

 

 

Regards,

Aznie


kokonuts
Novice

Hi @Aznie_Intel ,

 

Thank you for replying so quickly! Unfortunately, I still have no clue how to solve it, because I am really unfamiliar with some of these concepts, which is my problem.

 

Do you mean that using the compile tool with certain command lines is different from using the conversion web app 'Luxonis MyriadX Blob Converter'? I would prefer the handy conversion web app, as something seems to be wrong with my compile_tool setup and I cannot successfully run the command line you provided.

 

Actually, it would be wonderful if you could take a look at my model architecture, to see if there is a problem with the input/output sizes. Based on the error messages shown in the screenshots above, do you think it is because my input sizes are not fixed:

Screen Shot 2023-08-10 at 12.48.17 AM.png

 

And I have uploaded my model files so you can go through the details via this: link 

 

Many thanks,

Austin

Aznie_Intel
Moderator

Hi Austin,

 

The Luxonis MyriadX Blob Converter is not part of the OpenVINO toolkit or Intel. In OpenVINO, the blob file can only be generated with compile_tool. Please note that compile_tool is deprecated in the OpenVINO 2023 version and is only available up to OpenVINO 2022.3. I would advise you to use the compile tool from the 2022.3 version of OpenVINO.

 

I've verified your ONNX and XML files with benchmark_app and there is no issue with the models.

kokonuts.JPG
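
For reference, a sanity check with benchmark_app typically looks like the command below (the model path and device are placeholders):

./benchmark_app -m <path_to_model>/model_name.xml -d CPU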

 

You should be able to generate the blob file using the compile_tool.

 

 

Regards,

Aznie

 

 

kokonuts
Novice

Hi Aznie,

 

It's great to know that you have done some testing on my model and found it to be fine.

 

I wonder whether there is more detailed documentation guiding me through executing compile_tool step by step. The link you provided doesn't give me many ideas. So far I still don't know how to run compile_tool with the relevant command lines, although I have read through something like this.

 

Any suggestions would be appreciated, many thanks.

 

Regards,

Austin

Aznie_Intel
Moderator

 

Hi Austin,

 

The compile tool is only available in the installer, Debian, and RPM packages. If you are using the pip package, the tool is not included. You have to install the OpenVINO 2022.3 installer package by following this Install OpenVINO™ Runtime on Windows from an Archive File guide.

 

Then, you may find the tool as an executable file that can be run on both Linux and Windows. It is located in the <INSTALLROOT>/tools/compile_tool directory. Run the command below to use the Compile tool:

./compile_tool -m <path_to_model>/model_name.xml -d MYRIAD -o model_name.blob
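
If the tool or its libraries cannot be found, the OpenVINO environment may need to be initialized first. A sketch, assuming an archive installation on Linux or macOS (on Windows, run setupvars.bat instead):

source <INSTALLROOT>/setupvars.sh

cd <INSTALLROOT>/tools/compile_tool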

 

Make sure your Intermediate Representation files (.xml, .bin) are generated from the same version of OpenVINO.
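
If they were not, one way to regenerate the IR with the matching Model Optimizer is sketched below; the paths and the fixed input shape are placeholder assumptions (a static shape is needed for the MYRIAD plugin if your model has dynamic inputs):

mo --input_model <path_to_model>/model_name.onnx --input_shape [1,3,320,320] --output_dir <output_dir>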

 

 

Regards,

Aznie

 

kokonuts
Novice

Hi @Aznie_Intel 

 

Thank you for clarifying this. Is there a compatible package for macOS?

 

Cheers,

Austin

Aznie_Intel
Moderator

 

Hi Austin,

 

Here are the details: Install OpenVINO™ Runtime on macOS from an Archive File.

 

 

Regards,

Aznie


kokonuts
Novice

Dear Aznie,

 

Thank you for the link! Do you think the version of OpenVINO matters? Does the version of TensorFlow that I used to train my model have anything to do with it (i.e., do I have to use a TF version that is compatible with the OpenVINO version)?

 

Someone else just recommended that I use the 2022.1.0 version of OpenVINO, but I am not sure why.

 

Regards,

Austin

Aznie_Intel
Moderator

Hi Austin,

 

It is recommended to use the OpenVINO 2022.1 version if you use the MYRIAD plugin, since Intel® Movidius™ VPU-based products (the MYRIAD device) are not supported in the 2022.3.0 release.

 

 

Regards,

Aznie


Aznie_Intel
Moderator

Hi Austin,


This thread will no longer be monitored since we have provided the information needed. If you need any additional information from Intel, please submit a new question.



Regards,

Aznie

