Which inference engine is used with the pretrained-model emotions-recognition-retail-0003?
Can I use object_detection_sample_ssd with the following command?
~/openvino/inference_engine_cpp_samples_build/armv7l/Release/object_detection_sample_ssd -m ~/openvino/open_model_zoo/tools/downloader/intel/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.xml -d MYRIAD -i /dev/video0
I believe I have built object_detection_sample_ssd correctly (using cmake/make) into the path ~/openvino/inference_engine_cpp_samples_build/armv7l/Release/.
I am getting these messages as a result:
[ INFO ] InferenceEngine:
API version ............ 2.1
Build .................. 2020.3.0-3467-15f2c61a-releases/2020/3
Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] /dev/video0
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
MYRIAD
myriadPlugin version ......... 2.1
Build ........... 2020.3.0-3467-15f2c61a-releases/2020/3
[ INFO ] Loading network files:
/home/pi/openvino/open_model_zoo/tools/downloader/intel/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.xml
/home/pi/openvino/open_model_zoo/tools/downloader/intel/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ ERROR ] Can't find a DetectionOutput layer in the topology
Why am I getting this error? How should this error be interpreted?
Is there some other problem in my code?
Hello centroid,
For your information, the package does not include the Open Model Zoo demo applications. You can download them separately from the Open Model Zoo repository with this command line:
git clone -b 2020.3 https://github.com/openvinotoolkit/open_model_zoo.git
Next, the steps for building a demo application are as follows:
1. mkdir build && cd build
2. cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" <clone_dir>/open_model_zoo/demos
3. make -j2 <demo_you_desired>
The demo application will be built in the build/armv7l/Release directory.
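Putting those steps together, a complete sequence might look like this (a sketch only, assuming the repository is cloned into your home directory and that interactive_face_detection_demo is the demo you want; adjust paths and the demo name to your setup):
# clone the Open Model Zoo branch matching your OpenVINO release
git clone -b 2020.3 https://github.com/openvinotoolkit/open_model_zoo.git
# configure and build out of source
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" ~/open_model_zoo/demos
make -j2 interactive_face_detection_demo
# the binary ends up in ./armv7l/Release/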
For your information, there are some compatibility issues between the 2020.3 model versions and OpenVINO™ 2020.3. Please download the 2019 model versions from the link below:
https://download.01.org/opencv/2019/open_model_zoo/R3/20190905_163000_models_bin/
For the xml file, you can select all the text in the browser (CTRL + A), paste it into a text editor such as Notepad, remove the first irrelevant sentence (This XML file...), and save the file as <model_name>.xml. This copied xml will work even though its spacing differs from the original xml file.
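If you prefer doing this on the command line instead of a text editor, a roughly equivalent step (my own sketch, assuming you saved the copied page text as raw.txt and the irrelevant browser message occupies only the first line) would be:
# drop the first line and save under the expected model file name
sed '1d' raw.txt > emotions-recognition-retail-0003.xml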
Regards,
Peh
Hello centroid,
Thanks for reaching out to us.
You’re getting the error because the Object Detection C++ Sample SSD does not support the emotions-recognition-retail-0003 model.
The Object Detection C++ Sample SSD outputs an image (out_0.bmp) with detected objects enclosed in rectangles. It outputs the list of classes of the detected objects along with the respective confidence values and the coordinates of the rectangles to the standard output stream.
But the emotions-recognition-retail-0003 model is designed to recognize five emotions ('neutral', 'happy', 'sad', 'surprise', 'anger').
The emotions-recognition-retail-0003 model is supported by the Interactive Face Detection Demo, and it can run on the Inference Engine CPU, GPU, MYRIAD/HDDL, and HETERO:FPGA,CPU plugins.
All this information is available in the OpenVINO™ Toolkit documentation.
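If you want to verify this yourself, the layer types are plain text in the IR .xml file, so a quick check (a sketch, assuming the model files are in the current directory) is:
# count DetectionOutput layers in the IR; the emotions model has none
grep -c 'type="DetectionOutput"' emotions-recognition-retail-0003.xml
# an SSD-style detector such as face-detection-adas-0001 reports a nonzero count
grep -c 'type="DetectionOutput"' face-detection-adas-0001.xml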
Regards,
Peh
Thanks for your reply.
I installed openvino_toolkit_runtime_raspbian_p_2020.3.194 on Raspbian OS. Unfortunately, I cannot find the interactive_face_detection_demo in my installation. Am I missing something, or is it simply not part of this version? Do I have to install another version of OpenVINO?
According to the OpenVINO online documentation article Install OpenVINO™ toolkit for Raspbian* OS, the install package for the Raspberry Pi platform is somewhat limited (due to the limited capabilities of the target platform). It does not include the Model Optimizer, because converting a model might require more resources than are usually available on a Raspberry Pi board, and it does not include the Open Model Zoo with the demos and the model downloader (the full set of Open Model Zoo models is about 20-30 GB to download). It is recommended that you download and convert the models of interest on an ordinary system and then transfer the necessary files to the board.
You might also review the description of how we build the Open Model Zoo demos for the Raspberry Pi board as part of our work on the OpenVINO ARM CPU plugin (it is available as a separate open source project in OpenVINO contrib). With this additional OpenVINO ARM CPU plugin you will be able to run inference on both the ARM CPU and the MyriadX device on a Raspberry Pi.
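As a rough sketch of that workflow (hypothetical host paths and hostname; assuming the downloader.py script from open_model_zoo/tools/downloader on an ordinary x86 machine with the full toolkit installed):
# on the host: fetch the FP16 IR files with the Open Model Zoo downloader
python3 downloader.py --name emotions-recognition-retail-0003 --precisions FP16
# copy the resulting files to the Raspberry Pi
scp -r intel/emotions-recognition-retail-0003 pi@raspberrypi:~/models/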
I was able to build the interactive_face_detection_demo on my Raspberry Pi thanks to the advice in this thread. Now I am facing the following error: [ ERROR ] Face Detection network output layer should have 7 as a last dimension
pi@raspberrypi:~/Downloads/l_
InferenceEngine: 0xb6d4c010
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading device MYRIAD
MYRIAD
myriadPlugin version ......... 2.1
Build ........... 2020.4.0-359-21e092122f4-
[ INFO ] Loading network files for Face Detection
[ INFO ] Batch size is set to 1
[ INFO ] Checking Face Detection network inputs
[ INFO ] Checking Face Detection network outputs
[ ERROR ] Face Detection network output layer should have 7 as a last dimension
What is missing here?
Hello centroid,
You’re getting the error because you passed the wrong model to the demo.
The Interactive Face Detection C++ Demo executes the face-detection-adas-0001 model as the primary detection network for finding faces. It then executes four parallel infer requests for the Age/Gender Recognition, Head Pose Estimation, Emotions Recognition, and Facial Landmarks Detection networks, which run simultaneously. The Face Detection model is required, while the rest of the models are optional.
Hence, the command line for your case should be:
sudo ./interactive_face_detection_demo -i cam -m /home/pi/Downloads/l_openvino_toolkit_runtime_raspbian_p_2020.4.287/open_model_zoo/tools/downloader/intel/face-detection-adas-0001/FP32/face-detection-adas-0001.xml -d MYRIAD -m_em /home/pi/Downloads/l_openvino_toolkit_runtime_raspbian_p_2020.4.287/open_model_zoo/tools/downloader/intel/emotions-recognition-retail-0003/FP32/emotions-recognition-retail-0003.xml -d_em MYRIAD
You may refer to these parameters from the link below:
Furthermore, I noticed that you are using the FP32 versions of the models to run the demo with the Intel® Neural Compute Stick 2 (NCS2). Based on the Supported Model Formats for the VPU plugins, it is recommended to use the FP16 versions of the models for NCS2, as FP16 is the most ubiquitous and performant format.
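For example, the same command with the FP16 models would be (a sketch only, assuming the FP16 files sit next to the FP32 ones in your downloader directory):
sudo ./interactive_face_detection_demo -i cam -m /home/pi/Downloads/l_openvino_toolkit_runtime_raspbian_p_2020.4.287/open_model_zoo/tools/downloader/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.xml -d MYRIAD -m_em /home/pi/Downloads/l_openvino_toolkit_runtime_raspbian_p_2020.4.287/open_model_zoo/tools/downloader/intel/emotions-recognition-retail-0003/FP16/emotions-recognition-retail-0003.xml -d_em MYRIAD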
Regards,
Peh
Hello centroid,
This thread will no longer be monitored since we have provided solutions. If you need any additional information from Intel, please submit a new question.
Regards,
Peh
