Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all computer vision-related topics on Intel® platforms.

Empty model blob after compilation and [NOT_IMPLEMENTED] message

nano
Beginner

Problem Description:

When using the compile_tool to generate a model blob as described here: https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_compile_tool_README.html, I get an empty blob and the compile tool returns "[NOT_IMPLEMENTED]".

Question:

Do I have to implement something myself to make CPU compilation work? Is "CPU" not a valid device identifier? What is the problem here?

System:

I tried this on my Windows machine using OpenVINO 2020.3 LTS and on my Ubuntu 18.04 LTS machine with the newest apt-installed version.

On the Ubuntu machine, using the -d MYRIAD option works, but using -d CPU just returns [NOT_IMPLEMENTED]. However, I need the model to be compiled for CPUs.

Steps to reproduce for me (fresh installation); the same commands are collected in the sketch after this list:

- Download the ONNX AlexNet model from https://github.com/onnx/models/blob/master/vision/classification/alexnet/model/bvlcalexnet-8.onnx

- Use the Model Optimizer to generate the .xml and .bin files: python mo_onnx.py --input_model path_to_onnx\bvlcalexnet-8.onnx --output_dir path_to_out

- Run compile_tool -m /path_to_xml/file.xml -d CPU -o /path_to_out/out.blob
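
For convenience, here are the same steps collected as plain shell commands on Linux. This is only a sketch: the /opt/intel/openvino install path and the ~/models locations are assumptions that will likely differ on your machine, and it assumes the OpenVINO environment has already been initialized (source setupvars.sh) and the .onnx file has already been downloaded.

# Convert the downloaded ONNX model to IR (.xml/.bin)
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_onnx.py \
    --input_model ~/models/bvlcalexnet-8.onnx \
    --output_dir ~/models/ir

# Compile the IR to a blob; -d MYRIAD works here, -d CPU fails with [NOT_IMPLEMENTED]
compile_tool -m ~/models/ir/bvlcalexnet-8.xml -d CPU -o ~/models/out.blob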

 

Scrollbacks:

After mo_onnx:

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: ...\bvlcalexnet-8.onnx
- Path for generated IR: .../some_path/...
- IR output name: bvlcalexnet-8
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
ONNX specific parameters:
Model Optimizer version:

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: ...\bvlcalexnet-8.xml
[ SUCCESS ] BIN file: ...\bvlcalexnet-8.bin
[ SUCCESS ] Total execution time: 4.00 seconds.

 

After compile_tool:

C:\Program Files (x86)\IntelSWTools\openvino_2020.3.194\deployment_tools\inference_engine\bin\intel64\Release>compile_tool.exe -m D:\path\bvlcalexnet-8.xml -d CPU -o D:\alexnet.blob
Inference Engine:
API version ............ 2.1
Build .................. 2020.3.0-3467-15f2c61a-releases/2020/3
Description ....... API
[NOT_IMPLEMENTED]

2 Replies
Iffa_Intel
Moderator

Greetings,

 

FYI, the compile_tool currently supports only MYRIAD and FPGA, and myriad_compile supports only MYRIAD.

 

The documentation will be updated accordingly by the developers soon.
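
Note also that for the CPU target a pre-compiled blob is normally not needed: the CPU plugin loads the IR (.xml/.bin) directly at run time through the Inference Engine API. As a rough sketch only (the benchmark_app.py path is an assumption and may differ between versions and install locations), you can check that the IR runs on CPU like this:

# Run the IR directly on the CPU plugin; no blob is involved.
python3 /opt/intel/openvino/deployment_tools/tools/benchmark_tool/benchmark_app.py \
    -m ~/models/ir/bvlcalexnet-8.xml -d CPU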

 

Sincerely,

Iffa

 

Iffa_Intel
Moderator

Greetings,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question. 


Sincerely,

Iffa

