I am getting the error:
No module named 'openvino.runtime'
after running the command: import openvino.runtime
OpenVINO was installed on a Raspberry Pi 4 (32-bit) following the instructions at:
https://www.intel.com/content/www/us/en/support/articles/000057005/boards-and-kits.html
I need a solution, please.
Hi Farhâd,
Thank you for reaching out to us.
Which OpenVINO™ Toolkit version did you install on your Raspberry Pi, and did you get this error when running the OpenVINO™ Python Demo from Open Model Zoo?
For your information, I got the same error as yours when I used OpenVINO™ 2021.4.2 and ran an OpenVINO™ Python demo from Open Model Zoo 2022.2.0.
Please make sure that you git clone the specific branch of Open Model Zoo that matches your OpenVINO™ Toolkit package version. As mentioned, since I am using OpenVINO™ 2021.4.2, I git cloned the 2021.4.2 branch of Open Model Zoo using the command below:
git clone --depth 1 -b 2021.4.2 https://github.com/openvinotoolkit/open_model_zoo
Regards,
Megat
Thanks for the response.
I am not using a model from the zoo. I use my own model, trained with PyTorch, which I convert to OpenVINO IR using the "mo" command.
The OpenVINO IR I get is v11.
As I mentioned in my first post, I followed your instructions for the Raspberry Pi 4B:
https://www.intel.com/content/www/us/en/support/articles/000057005/boards-and-kits.html
which say I should get OpenVINO 2021.3.
Please help.
Hi Farhâd,
For your information, OpenVINO™ introduced the new API 2.0 and a new OpenVINO™ Python API starting from the OpenVINO™ 2022.1 release.
The error you got is because OpenVINO™ 2021.3 uses openvino.inference_engine to create a core object, while OpenVINO™ API 2.0 uses openvino.runtime. You can refer to Changes to Inference Pipeline in OpenVINO API v2 for more information.
Since you have installed OpenVINO™ 2021.3, you need to use Python inference code that implements the previous Inference Engine API. You can refer to the OpenVINO™ 2021.3 Python sample codes here. You also need to convert your PyTorch model to an IR v10 model. I would suggest using the same OpenVINO™ version when converting your model.
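For illustration, here is a minimal sketch (not taken from the official samples) of inference with the pre-2022 Inference Engine Python API; the file paths and the MYRIAD device name below are placeholders:
import numpy as np
from openvino.inference_engine import IECore   # 2021.x API (API 2.0 instead uses: from openvino.runtime import Core)

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")   # placeholder IR paths
exec_net = ie.load_network(network=net, device_name="MYRIAD")

input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape
dummy = np.zeros((n, c, h, w), dtype=np.float32)   # stand-in for a preprocessed image
results = exec_net.infer({input_name: dummy})      # dict mapping output names to arrays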
Regards,
Megat
Thanks.
1. Can API 2.0 be installed on a Raspberry Pi 4B? If so, please send me a link to the instructions.
2. I do not know how to convert my PyTorch model to an IR v10 model. Can you please refer me to a link or an instruction? The code I have used always gives IR v11.
Hi Farhâd,
API 2.0 is only included in OpenVINO™ versions starting from the OpenVINO™ 2022.1 release. To use API 2.0, you need to install one of the OpenVINO™ 2022 releases on your Raspberry Pi.
For your information, you can build OpenVINO™ 2022.1.0 from source on Raspberry Pi using the installation guide you mentioned. Please note that you need to git clone the OpenCV 4.5.5-openvino-2022.1.0 branch and the OpenVINO™ 2022.1.0 branch using the commands below:
git clone --depth 1 --branch 4.5.5-openvino-2022.1.0 https://github.com/opencv/opencv.git
git clone --depth 1 --branch 2022.1.0 https://github.com/openvinotoolkit/openvino.git
Regarding your second question, I observed that you posted a similar issue in another thread. Please refer to the answer there.
Regards,
Megat
Thank you.
1. So what I understand is that the installation guide I used for my RPi 4B is old, correct?
And that the two git clone commands should replace the ones in that guide.
2. What about the rest of the commands in the guide? Are they up to date? For example, there is a "cmake-3.14.4" command there. Is that the correct version of CMake?
3. How about the OpenCV dependencies command line:
"sudo apt install git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev libatlas-base-dev python3-scipy"
4. Also the guide mentions Raspbian Stretch or Buster 32-bit. Are these still valid for API 2.0 installation?
5. Can you please create a new installation guide for the RPi 4 for API 2.0? It would help avoid these issues.
Please also let me know how to remove the older version of opencv-openvino (4.5.2). I still get the older version after I did the GitHub clone for OpenCV 4.5.5.
Hi Farhâd,
I have built OpenVINO™ 2022.1 from source on a Raspberry Pi. I'm sharing the steps below:
1. Setting up the build environment
sudo apt update && sudo apt upgrade -y
sudo apt install build-essential
2. Installing CMake from source
Fetch CMake from the Kitware* GitHub* release page, extract it, and enter the extracted folder:
cd ~/
wget https://github.com/Kitware/CMake/releases/download/v3.18.4/cmake-3.18.4.tar.gz
tar xvzf cmake-3.18.4.tar.gz
cd ~/cmake-3.18.4
Run the bootstrap script to install additional dependencies and begin the build:
./bootstrap
make -j4
sudo make install
3. Installing OpenCV from source
Install the following packages:
sudo apt install git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev libatlas-base-dev python3-scipy
Clone the repository from OpenCV* GitHub page, prepare the build environment, and build:
cd ~/
git clone --depth 1 --branch 4.5.5-openvino-2022.1.0 https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local ..
make -j4
sudo make install
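As a quick sanity check, you can confirm from Python which OpenCV build gets picked up (for the 4.5.5-openvino-2022.1.0 branch above, the reported version should be 4.5.5):
import cv2
print(cv2.__version__)   # expect 4.5.5 for the branch built above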
4. Downloading source code and installing dependencies
The open-source version of Intel® OpenVINO™ toolkit is available through GitHub. The repository folder is titled openvino.
cd ~/
git clone --depth 1 --branch 2022.1.0 https://github.com/openvinotoolkit/openvino.git
Fetch the submodules from the repository:
cd ~/openvino
git submodule update --init --recursive
Run the script to install the dependencies for Intel® OpenVINO™ toolkit:
sh ./install_build_dependencies.sh
5. Building
The first step of the build is telling the system where the OpenCV installation is. Use the following command:
export OpenCV_DIR=/usr/local/lib/cmake/opencv4
To build the Python API wrapper, install all additional packages:
cd ~/openvino/samples/python/
pip3 install -r requirements.txt
The toolkit uses a CMake build system to guide and simplify the building process. To build both the Inference Engine and the MYRIAD plugin for the Intel® Neural Compute Stick 2, use the following commands. The backslashes (\) are line-continuation characters indicating that the lines below form a single command; keep them if you enter the command exactly as shown, or remove them and put the whole command on one line.
cd ~/openvino
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/home/pi/openvino_dist \
-DENABLE_MKL_DNN=OFF \
-DENABLE_CLDNN=OFF \
-DENABLE_GNA=OFF \
-DENABLE_SSE42=OFF \
-DTHREADING=SEQ \
-DENABLE_OPENCV=OFF \
-DNGRAPH_PYTHON_BUILD_ENABLE=ON \
-DNGRAPH_ONNX_IMPORT_ENABLE=ON \
-DENABLE_PYTHON=ON \
-DPYTHON_EXECUTABLE=$(which python3.7) \
-DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.7m.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.7 \
-DCMAKE_CXX_FLAGS=-latomic ..
make -j4
sudo make install
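Once the install finishes, a quick way to verify the runtime is the small sketch below, assuming the install placed a setupvars.sh script in /home/pi/openvino_dist and you source it in the same shell first so the libraries and Python bindings are found:
from openvino.runtime import Core   # API 2.0
core = Core()
print(core.available_devices)   # a MYRIAD entry should appear when the Intel® Neural Compute Stick 2 is plugged in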
For your information, I have validated these steps on Raspbian Buster 32-bit. Only Raspbian Buster 32-bit and Raspbian Stretch 32-bit are validated for OpenVINO™ on Raspberry Pi. We apologize for the inconvenience and will ask the relevant team to update the guide.
To uninstall your previous OpenCV build, run the command sudo make uninstall from your OpenCV 4.5.2-openvino build directory. You can refer here for more information.
Regards,
Megat
Hi Megat:
Thank you for the help.
I followed your instructions until the end.
Then I tried checking whether cv2 can be imported and got the following error:
>>> import cv2
RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/pi/.local/lib/python3.7/site-packages/cv2/__init__.py", line 8, in <module>
from .cv2 import *
ImportError: numpy.core.multiarray failed to import
It seems the numpy version in the requirements.txt file is not correct. Is that right, or is it something else?
Please let me know. Thank you again.
Now a new error:
>>> import openvino
>>>
>>> import openvino.runtime as ov
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/pi/openvino_dist/python/python3.7/openvino/runtime/__init__.py", line 18, in <module>
from openvino.pyopenvino import Dimension
ImportError: libopenvino.so: cannot open shared object file: No such file or directory
I cannot do import openvino.runtime.
Hi Megat:
Please ignore the other errors I reported yesterday. I was able to fix them.
I need a solution to the following. I was looking for an object detection script in the zoo samples and found the classification_sample_async.py code.
I tried it as below and got strange numbers:
python3 ~/openvino/samples/python/classification_sample_async/classification_sample_async.py -m models/person-vehicle-bike-detection-crossroad-0078.xml -i images/walk.jpg -d MYRIAD
[ INFO ] Creating OpenVINO Runtime Core
[ INFO ] Reading the model: models/person-vehicle-bike-detection-crossroad-0078.xml
[ INFO ] Loading the model to the plugin
[ INFO ] Starting inference in asynchronous mode
[ INFO ] Image path: images/walk.jpg
[ INFO ] Top 10 results:
[ INFO ] class_id probability
[ INFO ] --------------------
[ INFO ] 673 2.0000000
[ INFO ] 589 2.0000000
[ INFO ] 785 2.0000000
[ INFO ] 533 2.0000000
[ INFO ] 540 2.0000000
[ INFO ] 652 2.0000000
[ INFO ] 659 2.0000000
[ INFO ] 547 2.0000000
[ INFO ] 722 2.0000000
[ INFO ] 568 2.0000000
[ INFO ]
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
1. I don't understand the class IDs and where they come from. How can I know which ID corresponds to which class?
2. The probabilities are all 2.0! That doesn't make sense.
Thank you
Hi Farhâd,
For your information, the Image Classification Async Python* Sample is an inference sample for classification, not object detection. To run the sample, you need to use classification models, which are available from the Open Model Zoo Public Pre-Trained Models.
For Object Detection, I would suggest you use the Object Detection Python* Demo from our Open Model Zoo demo applications. To clone the Open Model Zoo repository, you need to git clone the specific branch of Open Model Zoo that matches your OpenVINO™ Toolkit version as follows:
git clone -b 2022.1.0 https://github.com/openvinotoolkit/open_model_zoo.git
cd open_model_zoo
git submodule update --init --recursive
If you encounter the error "ModuleNotFoundError: No Module Named 'openvino.model_zoo.model_api'", you need to install the Python* Model API package as follows:
pip install <omz_dir>/demos/common/python
On another note, here is the list of Supported Models for the Object Detection Python Demo.
Regards,
Megat
Hi Megat:
Thank you for the assistance.
I installed as above, with the Model API.
I ran the Object Detection Python Demo above but got errors. Could that be because my model is built with YOLOv5?
Here is what I got:
python3 /home/pi/open_model_zoo/demos/object_detection_demo/python/object_detection_demo.py -m models/<xml model> -i images/<image> -at yolo -d MYRIAD
[ INFO ] OpenVINO Runtime
[ INFO ] build: custom_HEAD_cdb9bec7210f8c24fde3e416c7ada820faaaa23e
[ INFO ] Reading model models/best_10-28-22_epoch=100.xml
[ WARNING ] The parameter "input_size" not found in YOLO wrapper, will be omitted
[ INFO ] Input layer: images, shape: [1, 3, 512, 512], precision: f32, layout: NCHW
[ INFO ] Output layer: output0, shape: [1, 16128, 12], precision: f32, layout:
[ INFO ] The model models/best_10-28-22_epoch=100.xml is loaded to MYRIAD
[ INFO ] Device: MYRIAD
[ INFO ] Number of streams: MYRIAD_THROUGHPUT_STREAMS_AUTO
[ INFO ] Number of model infer requests: 5
Traceback (most recent call last):
File "/home/pi/open_model_zoo/demos/object_detection_demo/python/object_detection_demo.py", line 298, in <module>
sys.exit(main() or 0)
File "/home/pi/open_model_zoo/demos/object_detection_demo/python/object_detection_demo.py", line 260, in main
results = detector_pipeline.get_result(next_frame_id_to_show)
File "/home/pi/.local/lib/python3.7/site-packages/openvino/model_zoo/model_api/pipelines/async_pipeline.py", line 124, in get_result
result = self.model.postprocess(raw_result, preprocess_meta), {**meta, **preprocess_meta}
File "/home/pi/.local/lib/python3.7/site-packages/openvino/model_zoo/model_api/models/yolo.py", line 123, in postprocess
detections = self._parse_outputs(outputs, meta)
File "/home/pi/.local/lib/python3.7/site-packages/openvino/model_zoo/model_api/models/yolo.py", line 225, in _parse_outputs
detections += self._parse_yolo_region(out_blob, meta['resized_shape'], layer_params[1])
File "/home/pi/.local/lib/python3.7/site-packages/openvino/model_zoo/model_api/models/yolo.py", line 131, in _parse_yolo_region
predictions = permute_to_N_HWA_K(predictions, params.bbox_size, params.output_layout)
File "/home/pi/.local/lib/python3.7/site-packages/openvino/model_zoo/model_api/models/yolo.py", line 41, in permute_to_N_HWA_K
assert tensor.ndim == 4, tensor.shape
AssertionError: (1, 16128, 12)
Hi Farhâd,
Thanks for your information.
For your information, we regret to inform you that YOLOv5 is not among the supported models for the Object Detection Python Demo.
The supported models for the Object Detection Python Demo are as follows:
· architecture_type = centernet
ctdet_coco_dlav0_512
· architecture_type = ctpn
ctpn
· architecture_type = detr
detr-resnet50
· architecture_type = faceboxes
faceboxes-pytorch
· architecture_type = nanodet
nanodet-m-1.5x-416
· architecture_type = nanodet-plus
nanodet-plus-m-1.5x-416
· architecture_type = retinaface-pytorch
retinaface-resnet50-pytorch
· architecture_type = ssd
efficientdet-d0-tf
efficientdet-d1-tf
face-detection-0200
face-detection-0202
face-detection-0204
face-detection-0205
face-detection-0206
face-detection-adas-0001
face-detection-retail-0004
face-detection-retail-0005
face-detection-retail-0044
faster-rcnn-resnet101-coco-sparse-60-0001
faster_rcnn_inception_resnet_v2_atrous_coco
faster_rcnn_resnet50_coco
pedestrian-and-vehicle-detector-adas-0001
pedestrian-detection-adas-0002
pelee-coco
person-detection-0106
person-detection-0200
person-detection-0201
person-detection-0202
person-detection-0203
person-detection-0301
person-detection-0302
person-detection-0303
person-detection-retail-0013
person-vehicle-bike-detection-2000
person-vehicle-bike-detection-2001
person-vehicle-bike-detection-2002
person-vehicle-bike-detection-2003
person-vehicle-bike-detection-2004
product-detection-0001
retinanet-tf
rfcn-resnet101-coco-tf
ssd300
ssd512
ssd_mobilenet_v1_coco
ssd_mobilenet_v1_fpn_coco
ssd-resnet34-1200-onnx
ssdlite_mobilenet_v2
vehicle-detection-0200
vehicle-detection-0201
vehicle-detection-0202
vehicle-detection-adas-0002
vehicle-license-plate-detection-barrier-0106
vehicle-license-plate-detection-barrier-0123
· architecture_type = ultra_lightweight_face_detection
ultra-lightweight-face-detection-rfb-320
ultra-lightweight-face-detection-slim-320
· architecture_type = yolo
mobilefacedet-v1-mxnet
mobilenet-yolo-v4-syg
person-vehicle-bike-detection-crossroad-yolov3-1020
yolo-v1-tiny-tf
yolo-v2-ava-0001
yolo-v2-ava-sparse-35-0001
yolo-v2-ava-sparse-70-0001
yolo-v2-tf
yolo-v2-tiny-ava-0001
yolo-v2-tiny-ava-sparse-30-0001
yolo-v2-tiny-ava-sparse-60-0001
yolo-v2-tiny-tf
yolo-v2-tiny-vehicle-detection-0001
yolo-v3-tf
yolo-v3-tiny-tf
· architecture_type = yolov3-onnx
yolo-v3-onnx
yolo-v3-tiny-onnx
· architecture_type = yolov4
yolo-v4-tf
yolo-v4-tiny-tf
· architecture_type = yolof
yolof
· architecture_type = yolox
yolox-tiny
Hope it helps.
Regards,
Wan
Thanks.
But the error seems to be about a dimensionality problem.
Can you please look into the error message?
Hi Farhâd,
Thanks for your patience.
The error you encountered, "AssertionError: (1, 16128, 12)", was due to your model's output shape being (1, 16128, 12), while the demo's YOLO parser asserts that the output tensor has 4 dimensions (tensor.ndim == 4).
To solve this issue, you must convert your ONNX model into Intermediate Representation with the following command:
mo --input_model <ONNX_model> --output <output_names> --data_type FP16 --scale_values=<input_names>[255] --input_shape=<input_shape> --input=<input_names>
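For illustration only, here is an example invocation using the values visible in your earlier log (input name images, input shape [1,3,512,512]); the ONNX file name is a placeholder and the output node names depend on your model:
mo --input_model best.onnx --output <output_names> --data_type FP16 --scale_values=images[255] --input_shape=[1,3,512,512] --input=images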
Disclaimer: YOLOv5 is not a supported model in the Object Detection Python Demo.
Regards,
Wan
Thank you.
1. Could you please describe the options and give some examples so I could understand better?
2. Originally I converted my PyTorch model to ONNX and then to IR. Could I have missed something in that conversion that caused the dimensionality problem, like a missing option?
How do you install "mo" on a Raspberry Pi 4B?
I have OpenVINO 2022.1 installed. Python version 3.7.3.
I tried the command:
>> pip install openvino-dev[onnx]
But got the error:
ERROR: Cannot install openvino-dev[onnx]==2021.3.0, openvino-dev[onnx]==2021.4.0, openvino-dev[onnx]==2021.4.1, openvino-dev[onnx]==2021.4.2, openvino-dev[onnx]==2022.1.0 and openvino-dev[onnx]==2022.2.0 because these package versions have conflicting dependencies.
The conflict is caused by:
openvino-dev[onnx] 2022.2.0 depends on rawpy~=0.17.1; python_version >= "3.7"
openvino-dev[onnx] 2022.1.0 depends on openvino==2022.1.0
openvino-dev[onnx] 2021.4.2 depends on openvino==2021.4.2
openvino-dev[onnx] 2021.4.1 depends on rawpy>=0.16.0
openvino-dev[onnx] 2021.4.0 depends on rawpy>=0.16.0
openvino-dev[onnx] 2021.3.0 depends on fast-ctc-decode>=0.2