Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer-vision related on Intel® platforms.

openvino.runtime

Farhâd
Novice

I am getting the error:

No module named 'openvino.runtime'

after running: import openvino.runtime

OpenVINO was installed on a Raspberry Pi 4 (32-bit) following the instructions at:

https://www.intel.com/content/www/us/en/support/articles/000057005/boards-and-kits.html

 

I need a solution, please.
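One likely explanation (not stated above): the 32-bit Raspbian packages ship the older `openvino.inference_engine` Python API, while `openvino.runtime` only arrived in OpenVINO 2022.1. A minimal diagnostic sketch to check which API generation your interpreter actually has; the module names are the only assumption:

```python
import importlib.util

def detect_openvino_api():
    """Return which OpenVINO Python API generation is importable, if any."""
    try:
        if importlib.util.find_spec("openvino.runtime") is not None:
            return "new"     # 2022.1+: from openvino.runtime import Core
        if importlib.util.find_spec("openvino.inference_engine") is not None:
            return "legacy"  # 2021.x Raspbian builds: from openvino.inference_engine import IECore
    except ModuleNotFoundError:
        pass                 # the openvino package itself is absent
    return "missing"

print(detect_openvino_api())
```

If this prints "legacy", the import that fails is expected, and the 2021.x `IECore` API is the one to use on that install.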

 

 

 

26 Replies
Wan_Intel
Moderator

Hi Farhad,

Referring to System Requirements in OpenVINO™ Development Tools, the Raspbian operating system is not supported. Therefore, you must use one of the following operating systems to convert your PyTorch model into Intermediate Representation using Model Optimizer from OpenVINO™ Development Tools:

  • Ubuntu* 18.04 long-term support (LTS), 64-bit
  • Ubuntu* 20.04 long-term support (LTS), 64-bit
  • Red Hat* Enterprise Linux* 8, 64-bit      
  • macOS* 10.15.x versions  
  • Windows 10*, 64-bit

 

Next, the steps to convert the YOLOv5s PyTorch model into Intermediate Representation are as follows:

  1. Clone the YOLOv5 GitHub repository:
    git clone https://github.com/ultralytics/yolov5

  2. Install the prerequisites:
    cd yolov5
    pip install -r requirements.txt

  3. Download the YOLOv5s PyTorch model.

  4. Convert the YOLOv5s PyTorch model into a YOLOv5s ONNX model:
    python export.py --weights yolov5s.pt --img 640 --batch 1 --include onnx
  5. Convert the YOLOv5s ONNX model into Intermediate Representation using Model Optimizer:
    mo --input_model <ONNX_model> --output <output_names> --data_type FP16 --scale_values=<input_names>[255] --input_shape=<input_shape> --input=<input_names>
  6. Check that each output tensor of the Intermediate Representation is four-dimensional rather than three-dimensional:
    <Output: name_1 shape{1,255,80,80}>
    <Output: name_2 shape{1,255,40,40}>
    <Output: name_3 shape{1,255,20,20}>
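Those three shapes can be derived from the YOLOv5 architecture. A sketch under assumptions not spelled out in the steps above: a 640×640 input, 80 COCO classes, 3 anchors per grid cell, and the three detection heads at strides 8, 16, and 32:

```python
# Each grid cell predicts: 3 anchors x (4 box coords + 1 objectness + 80 classes)
input_size = 640
num_classes = 80          # COCO
anchors_per_cell = 3
channels = anchors_per_cell * (num_classes + 5)   # = 255

for stride in (8, 16, 32):
    grid = input_size // stride                   # 80, 40, 20
    print((1, channels, grid, grid))
# (1, 255, 80, 80)
# (1, 255, 40, 40)
# (1, 255, 20, 20)
```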

 

Then, you can move the Intermediate Representation to your Raspberry Pi for inference. You may also refer to the following links for more information on how to convert your YOLOv5 model into Intermediate Representation:
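Running the moved IR on the Pi could look roughly like the sketch below. It assumes the legacy 2021.x API that the Raspbian package provides (`openvino.inference_engine`), placeholder file names (yolov5s.xml/.bin, input.jpg), and OpenCV for image handling; the imports sit inside the function so the sketch reads standalone:

```python
def run_on_myriad(xml_path="yolov5s.xml", bin_path="yolov5s.bin",
                  image_path="input.jpg"):
    """Load the converted IR and run one frame on the NCS2 (MYRIAD)."""
    import cv2
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model=xml_path, weights=bin_path)
    exec_net = ie.load_network(network=net, device_name="MYRIAD")

    input_name = next(iter(net.input_info))
    _, _, h, w = net.input_info[input_name].input_data.shape  # e.g. 1, 3, 640, 640

    image = cv2.imread(image_path)
    blob = cv2.resize(image, (w, h)).transpose(2, 0, 1)[None]  # HWC -> NCHW
    results = exec_net.infer({input_name: blob})
    for name, tensor in results.items():
        print(name, tensor.shape)  # the three detection heads
    return results
```

Call run_on_myriad() on the Pi itself; decoding the raw heads into boxes still requires the usual YOLO post-processing.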

 

Hope it helps.

 

 

Regards,

Wan

 

Farhâd
Novice

Thanks.

 

Please answer in full so we save time.

 

1. You did not describe the other options below. Please give examples:

 --scale_values=<input_names>[255] --input_shape=<input_shape> --input=<input_names>

 

2. And why FP16 and not FP32?

My Raspberry Pi 4B is 32-bit. Therefore, shouldn't I use FP32?

 

3. Also, what are name_1, name_2, and name_3?

 

4. I also asked how to install the "mo" command. My previous questions are missing.

 

 

Wan_Intel
Moderator

Hi Farhad,

Referring to the previous link that I’ve shared, Accuracy Checker reshape error in yolov5, the command to convert the YOLOv5s ONNX model into Intermediate Representation is:

 

mo \
--input_model yolov5s.onnx \
--output_dir <output_dir> \
--output Conv_198,Conv_233,Conv_268 \
--data_type FP16 \
--scale_values=images[255] \
--input_shape=[1,3,640,640] \
--input=images

 

 

To find out your --input, --input_shape, and --output, please refer to this thread for more information.
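As a complement to that thread, one way to discover the names to pass to --input and --output is to inspect the ONNX graph directly. A sketch, assuming the `onnx` package is installed and the exported file is named yolov5s.onnx (the import sits inside the function so the sketch reads standalone):

```python
def list_io_names(onnx_path="yolov5s.onnx"):
    """Print the graph-level input and output tensor names of an ONNX model."""
    import onnx
    model = onnx.load(onnx_path)
    # Old exporters list weights as graph inputs too; filter those out.
    initializers = {t.name for t in model.graph.initializer}
    inputs = [i.name for i in model.graph.input if i.name not in initializers]
    outputs = [o.name for o in model.graph.output]
    print("inputs: ", inputs)   # for YOLOv5s typically ['images']
    print("outputs:", outputs)
    return inputs, outputs
```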

After you convert your ONNX model into Intermediate Representation, the outputs of the model are as follows:

 

<Output: names[326] shape{1,255,80,80} type: f32>
<Output: names[378] shape{1,255,40,40} type: f32>
<Output: names[430] shape{1,255,20,20} type: f32>

 


To use the “mo” command, you must install Model Optimizer on Windows, Linux, or macOS via the following command:

 

pip install "openvino-dev[onnx,pytorch]"

 


On the other hand, the supported model format for the Intel® NCS2 is FP16.

 

 

Regards,

Wan

 

Farhâd
Novice

Thank you.

 

This worked. I do get the output you showed.

 

But now I have another pesky error:

 

input_img shape= (1, 3, 640, 3)

output_layer= <ConstOutput: names[onnx::Reshape_329] shape{1,36,80,80} type: f32>
The input blob size is not equal to the network input size: got 5760 expecting 1228800

 

Why does this error occur so often?

 

Please let me know how to solve this.
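For what it's worth, the numbers in that message decode the problem: 5760 = 1×3×640×3, which matches the input_img shape of (1, 3, 640, 3), while the network expects 1×3×640×640 = 1228800 values. That usually means the frame was never resized to 640×640 and transposed from HWC to NCHW before inference. A numpy-only sketch of the expected preprocessing (nearest-neighbour index sampling stands in for cv2.resize, and the function name is illustrative):

```python
import numpy as np

def to_nchw_blob(image_hwc, size=640):
    """Turn an HxWx3 image into the 1x3xSxS float blob the network expects."""
    h, w, _ = image_hwc.shape
    rows = np.arange(size) * h // size            # nearest-neighbour row picks
    cols = np.arange(size) * w // size            # nearest-neighbour column picks
    resized = image_hwc[rows][:, cols]            # (size, size, 3)
    blob = resized.transpose(2, 0, 1)[np.newaxis] # HWC -> NCHW, add batch dim
    return blob.astype(np.float32)

img = np.zeros((480, 640, 3), dtype=np.uint8)     # stand-in camera frame
blob = to_nchw_blob(img)
print(blob.shape, blob.size)                      # (1, 3, 640, 640) 1228800
```

blob.size now matches the 1228800 the network asked for, so the "input blob size is not equal to the network input size" error goes away.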

Wan_Intel
Moderator

Hi Farhad,

For your information, you may use the following command in the Object Detection Python Demo to run inference on MYRIAD with the YOLOv5 model:

python3 object_detection_demo.py -m yolov5s.xml -at yolov4 -i <path_to_input> -d MYRIAD

 

For more information on the list of options for the Object Detection Python Demo, please refer here.

 

On another note, you might be interested in the open-source GitHub project Object Detection & YOLOs by bethusaisampath. It demonstrates how YOLOv5 model inferencing can be done using the Intel® OpenVINO™ toolkit. Hope it helps.

 

 

Regards,

Wan


Megat_Intel
Moderator

Hi Farhâd,

This thread will no longer be monitored since we have provided suggestions. If you need any additional information from Intel, please submit a new question.

 

 

Regards,

Megat

