Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Error when running benchmark_app with NCS2 (Myriad X) on x86_64 platform

blakec
Beginner

Hi,


I am trying to get the Neural Compute Stick 2 (NCS2) to work on an Intel Xeon workstation running Ubuntu 20.04. However, I get the error below when running the demo benchmark app. The benchmark runs fine on the CPU device.


I have completed the installation steps for the NCS2, including adding the user 'ubuntu' to the 'users' group, and followed the steps for setting up the USB udev rules.
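
For reference, the USB rules setup I followed was roughly the following (assuming the default 2021.4 install layout; the exact location of the 97-myriad-usbboot.rules file may differ between releases):

sudo usermod -a -G users "$(whoami)"
# log out and back in so the group change takes effect
sudo cp /opt/intel/openvino_2021/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
sudo ldconfig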


The stick is plugged in and shows this when running lsusb:

Bus 001 Device 014: ID 03e7:2485 Intel Movidius MyriadX

What can I do to solve this problem?


Best,

Blake


./demo_benchmark_app.sh -d MYRIAD
target = MYRIAD
target_precision = FP16
[setupvars.sh] OpenVINO environment initialized


###################################################



Downloading the Caffe model and the prototxt
Installing dependencies
...

Run python3 /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name squeezenet1.1 --output_dir /home/ubuntu/openvino_models/models --cache_dir /home/ubuntu/openvino_models/cache

################|| Downloading squeezenet1.1 ||################

========== Retrieving /home/ubuntu/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt from the cache

========== Retrieving /home/ubuntu/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel from the cache

========== Replacing text in /home/ubuntu/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt



###################################################

Install Model Optimizer dependencies
...

        - Inference Engine found in:    /opt/intel/openvino_2021/python/python3.8/openvino
Inference Engine version:       2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version:        2021.4.1-3926-14e67d86634-releases/2021/4
[WARNING] All Model Optimizer dependencies are installed globally.
[WARNING] If you want to keep Model Optimizer in separate sandbox
[WARNING] run install_prerequisites.sh "{caffe|tf|tf2|mxnet|kaldi|onnx}" venv


###################################################

Convert a model with Model Optimizer

Run python3 /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/converter.py --mo /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --name squeezenet1.1 -d /home/ubuntu/openvino_models/models -o /home/ubuntu/openvino_models/ir --precisions FP16

========== Converting squeezenet1.1 to IR (FP16)
Conversion command: /usr/bin/python3 -- /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --framework=caffe --data_type=FP16 --output_dir=/home/ubuntu/openvino_models/ir/public/squeezenet1.1/FP16 --model_name=squeezenet1.1 '--input_shape=[1,3,227,227]' --input=data '--mean_values=data[104.0,117.0,123.0]' --output=prob --input_model=/home/ubuntu/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel --input_proto=/home/ubuntu/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /home/ubuntu/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel
        - Path for generated IR:        /home/ubuntu/openvino_models/ir/public/squeezenet1.1/FP16
        - IR output name:       squeezenet1.1
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         data
        - Output layers:        prob
        - Input shapes:         [1,3,227,227]
        - Mean values:  data[104.0,117.0,123.0]
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP16
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       None
        - Reverse input channels:       False
Caffe specific parameters:
        - Path to Python Caffe* parser generated from caffe.proto:      /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/utils/../front/caffe/proto
        - Enable resnet optimization:   True
        - Path to the Input prototxt:   /home/ubuntu/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt
        - Path to CustomLayersMapping.xml:      /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/utils/../../extensions/front/caffe/CustomLayersMapping.xml
        - Path to a mean file:  Not specified
        - Offsets for a mean file:      Not specified
        - Inference Engine found in:    /opt/intel/openvino_2021/python/python3.8/openvino
Inference Engine version:       2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version:        2021.4.1-3926-14e67d86634-releases/2021/4
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/ubuntu/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml
[ SUCCESS ] BIN file: /home/ubuntu/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.bin
[ SUCCESS ] Total execution time: 3.67 seconds.
[ SUCCESS ] Memory consumed: 95 MB.



###################################################

Build Inference Engine samples

-- The C compiler identification is GNU 9.3.0
-- The CXX compiler identification is GNU 9.3.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for C++ include unistd.h
-- Looking for C++ include unistd.h - found
-- Looking for C++ include stdint.h
-- Looking for C++ include stdint.h - found
-- Looking for C++ include sys/types.h
-- Looking for C++ include sys/types.h - found
-- Looking for C++ include fnmatch.h
-- Looking for C++ include fnmatch.h - found
-- Looking for strtoll
-- Looking for strtoll - found
-- Configuring done
-- Generating done
-- Build files have been written to: /home/ubuntu/inference_engine_samples_build
[ 22%] Built target gflags_nothreads_static
[ 44%] Built target ie_samples_utils
[ 72%] Built target format_reader
[100%] Built target benchmark_app


###################################################

Run Inference Engine benchmark app

Run ./benchmark_app -nstreams 1 -d MYRIAD -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m /home/ubuntu/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -pc -niter 1000

[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /opt/intel/openvino_2021/deployment_tools/demo/car.png
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
        IE version ......... 2021.4.1
        Build ........... 2021.4.1-3926-14e67d86634-releases/2021/4
[ INFO ] Device info:
        MYRIAD
        myriadPlugin version ......... 2021.4.1
        Build ........... 2021.4.1-3926-14e67d86634-releases/2021/4

[Step 3/11] Setting device configuration
[Step 4/11] Reading network files
[ INFO ] Loading network files
[ INFO ] Read network took 13.89 ms
[Step 5/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 6/11] Configuring input of the model
Network inputs:
    data : U8 / NCHW
Network outputs:
    prob : FP32 / NCHW
[Step 7/11] Loading the model to the device
E: [global] [    259548] [Scheduler00Thr] dispatcherEventSend:61        Write failed -1

E: [xLink] [    259548] [Scheduler00Thr] sendEvents:998 Event sending failed
E: [global] [    260477] [Scheduler00Thr] dispatcherEventSend:53        Write failed (header) (err -1) | event XLINK_WRITE_REQ

E: [xLink] [    260477] [Scheduler00Thr] sendEvents:998 Event sending failed
E: [global] [    260477] [benchmark_app] addEvent:262   Condition failed: event->header.flags.bitField.ack != 1
E: [global] [    260477] [benchmark_app] addEventWithPerf:276    addEvent(event) method call failed with an error: 3
E: [global] [    260477] [benchmark_app] XLinkReadData:156      Condition failed: (addEventWithPerf(&event, &opTime))
E: [ncAPI] [    260477] [benchmark_app] ncGraphAllocate:2070    Can't read input tensor descriptors of the graph, rc: X_LINK_ERROR
E: [global] [    272287] [Scheduler00Thr] dispatcherEventSend:53        Write failed (header) (err -4) | event XLINK_RESET_REQ

E: [xLink] [    272287] [Scheduler00Thr] sendEvents:998 Event sending failed
[ ERROR ] Failed to allocate graph: NC_ERROR
Error on or near line 221; exiting with status 1


Peh_Intel
Moderator

Hi blakec,


Thanks for reaching out to us.


I’ve validated that the Benchmark Demo of OpenVINO™ 2021.4.1 works fine with the Intel® Neural Compute Stick 2 (NCS2) on Ubuntu 20.04 LTS on my end.


I noticed that your NCS2 is detected by the system and the MYRIAD plugin is also loaded when running the Benchmark Demo. However, an XLink error arises when the model is being loaded to the device.


Hence, I would suggest plugging the NCS2 into different USB ports (both USB 3.0 and USB 2.0 ports). It would also be worth connecting the NCS2 through a USB 3.0 or USB 2.0 hub if you have one available. My suggestion is based on a previous post that discussed a similar issue, which turned out to be related to the USB port.
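
To narrow this down, it can also help to check which port and link speed the stick enumerates at, and to watch the kernel log while re-plugging it. A minimal check with standard Linux tools:

# show the USB tree with negotiated speeds; look for the 03e7 device
# (480M means USB 2.0, 5000M means USB 3.0)
lsusb -t
# follow kernel messages while re-plugging the NCS2 to spot
# enumeration, reset, or power errors
dmesg -w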


If the issue still persists after trying different USB ports and USB hubs, please also try running other demos with the MYRIAD plugin, as well as the Hello Query Device sample, and share the results with us.
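
As a quick related check, you can also list the devices the Inference Engine can see from Python (a minimal sketch, assuming setupvars.sh has been sourced; this is similar to what the Hello Query Device sample reports):

# MYRIAD should appear in the printed list alongside CPU
python3 -c "from openvino.inference_engine import IECore; print(IECore().available_devices)"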



Regards,

Peh


Peh_Intel
Moderator

Hi blakec,


This thread will no longer be monitored since we have provided possible solutions. If you need any additional information from Intel, please submit a new question. 



Regards,

Peh

