I have downloaded openvino_toolkit_runtime_raspbian_p_2019.3.334 on a Raspberry Pi 4B. I want to run yolov3-tiny on the Raspberry Pi 4B, so I converted the yolov3-tiny weights to IR. However, when I executed make -j2 object_detection_demo_yolov3_async, it could not find that target.
I searched for it under /opt/intel/inference_engine/samples/python_samples, but there is no object_detection_demo_yolov3_async file in it.
Please help me.
Thanks.
Hi Brian,
Thanks for reaching out to us.
For your information, the package does not include the Open Model Zoo demo applications. You can download them separately from the Open Model Zoo repository with this command:
git clone -b 2019 https://github.com/openvinotoolkit/open_model_zoo.git
Next, the steps for building and executing the demo applications are the same as the flow in Build and Run Code Samples. The changes that need to be made are as follows (a consolidated sketch follows these commands):
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp
to
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" <clone_dir>/open_model_zoo/demos
and
make -j2 object_detection_sample_ssd
to
make -j2 <demo_you_desired>
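Putting it together, a consolidated sketch of the whole build flow on the Raspberry Pi might look like the following (the working directories and the <clone_dir> placeholder are illustrative assumptions, not part of the official steps):
git clone -b 2019 https://github.com/openvinotoolkit/open_model_zoo.git
# set up the OpenVINO environment before configuring the build
source /opt/intel/openvino/bin/setupvars.sh
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" <clone_dir>/open_model_zoo/demos
make -j2 object_detection_demo_yolov3_async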
Regards,
Peh
Because of consecutive holidays these days, we have not tried it yet. I expect to try it tomorrow.
I executed the suggestion you gave and the problem was solved. However, I met a new problem. When I finished those commands, I tried to run the yolov3 demo. I had cloned with:
git clone https://github.com/openvinotoolkit/open_model_zoo.git (I guess it is the 2021 version.)
but when I ran:
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" <clone_dir>/open_model_zoo/demos
it produced lots of errors.
I think downloading version 2019 on Windows is a better choice. Thus, I searched for version 2019 for Windows, but I cannot find it.
What should I do?
Thanks for your reply.
Hi Brian,
For your information, Intel® Distribution of OpenVINO™ Toolkit version 2019 and older versions are no longer available for download. Furthermore, the yolo-v3-tf model has only been supported since OpenVINO™ Toolkit version 2020.2, while the yolo-v3-tiny-tf model has only been supported since OpenVINO™ Toolkit version 2021.1. As such, I would recommend that you update the OpenVINO™ Toolkit on your Raspberry Pi to version 2021.1, since object_detection_demo_yolov3_async is still available in Open Model Zoo version 2021.1.
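For reference, a minimal sketch of updating the runtime package on the Raspberry Pi, assuming you have downloaded the 2021.1 Raspbian runtime archive from the official download page (the archive name below is a placeholder, not an exact file name):
sudo mkdir -p /opt/intel/openvino
# extract the downloaded archive into the installation directory
sudo tar -xf l_openvino_toolkit_runtime_raspbian_p_<2021.1_build>.tgz --strip 1 -C /opt/intel/openvino
source /opt/intel/openvino/bin/setupvars.sh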
Regards,
Peh
I have updated OpenVINO™ to version 2021.1. However, when I try to run CMake and then use this command:
make -j2 object_detection_demo_yolov3_async
I meet a problem like the one in the attachment.
Hi Brian,
Based on your error, the ie_plugin_dispatcher.hpp file cannot be found. This is because this .hpp file was removed starting from OpenVINO™ 2021.1. You may refer here.
I've validated that the CMake build completes successfully when using Open Model Zoo version 2021.1 with OpenVINO™ version 2021.1.
Please clone Open Model Zoo version 2021.1 with the command below:
git clone -b 2021.1 https://github.com/openvinotoolkit/open_model_zoo.git
Note: Remember to run the setupvars.sh script before starting the CMake build:
source /opt/intel/openvino/bin/setupvars.sh
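Optionally, to have the environment set up automatically in every new shell, you can append the same line to your ~/.bashrc (an optional convenience, following the Raspberry Pi installation guide):
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc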
Furthermore, I would like to share where to get the label file for yolo-v3-tiny-tf. You can find the label file at the link below; you can simply copy all the classes and paste them into a text file.
https://github.com/openvinotoolkit/open_model_zoo/blob/master/data/dataset_classes/coco_80cl.txt
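Alternatively, you can fetch the label file directly on the Raspberry Pi, assuming it is still available at this path on the master branch:
wget https://raw.githubusercontent.com/openvinotoolkit/open_model_zoo/master/data/dataset_classes/coco_80cl.txt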
Hence, the complete command for executing object_detection_demo_yolov3_async would be:
./object_detection_demo_yolov3_async -d MYRIAD -i 0 -m <model_path>/yolo-v3-tiny-tf.xml --labels <label_path>/coco_80cl.txt
Regards,
Peh
I have executed the above commands you gave, then I executed this command:
./object_detection_demo_yolov3_async -d MYRIAD -i 0 -m <model_path>/yolo-v3-tiny-tf.xml --labels <label_path>/coco_80cl.txt
but an error like the one in the attachment occurred.
I appreciate your help very much.
Hi Brian,
I only get this stoi error when using a yolo-v3-tiny-tf model that was downloaded and converted by the Model Downloader and Model Converter of OpenVINO™ 2021.3. It is always advisable to use the same version of OpenVINO™ and Open Model Zoo, and also to use the Model Downloader from that OpenVINO™ version to download the model.
As such, I have uploaded a yolo-v3-tiny-tf model that was downloaded and converted by the Model Downloader and Model Converter of OpenVINO™ 2021.1.
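If you would rather download and convert the model yourself, a minimal sketch with the Open Model Zoo 2021.1 tools would be the following; note that the conversion step normally runs on a development machine with the full toolkit installed, since the Raspberry Pi runtime package does not include Model Optimizer (the <clone_dir> placeholder is illustrative):
# download the original TensorFlow model, then convert it to IR
python3 <clone_dir>/open_model_zoo/tools/downloader/downloader.py --name yolo-v3-tiny-tf
python3 <clone_dir>/open_model_zoo/tools/downloader/converter.py --name yolo-v3-tiny-tf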
Furthermore, I would like to apologize for a mistake I made when specifying the input parameter in the command. For the older version of the demo, the option for using a webcam as input should be written as:
-i cam
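So the complete command for the older demo would look like this (model and label paths are placeholders):
./object_detection_demo_yolov3_async -d MYRIAD -i cam -m <model_path>/yolo-v3-tiny-tf.xml --labels <label_path>/coco_80cl.txt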
Regards,
Peh
When I executed the command:
./object_detection_demo_yolov3_async -d MYRIAD -i cam -m ../../../frozen_darknet_yolov3_model.xml --labels ./coco.txt
An error like the one in the attachment occurred.
Besides, I want to ask how to modify the code, or what other method I can use, to get the real-time predictions of yolov3, because I want to use the predictions to control a mechanical arm.
Hi Brian,
I would like you to try my attached IR model with the previous command:
./object_detection_demo_yolov3_async -d MYRIAD -i 0 -m <model_path>/yolo-v3-tiny-tf.xml --labels <label_path>/coco_80cl.txt
For the Python version of this demo, the command should be:
python3 object_detection_demo_yolov3_async.py -d MYRIAD -i cam -m <model_path>/yolo-v3-tiny-tf.xml --labels <label_path>/coco_80cl.txt
Sorry for overlooking the version of the demos.
For your second question regarding modifying the code in this demo for your use case, it can be very complex and may not be feasible. On the other hand, you may refer to these prebuilt open-source projects, which take real-time predictions for further processing, for various use cases.
Regards,
Peh
Hi Brian,
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Regards,
Peh