I have a custom board and I've installed a VPU mini-PCIe card with Myriad X (AI Core X).
I use the OpenVINO toolkit normally with the CPU and everything works fine, but I have a problem when I try to use the Myriad X. I set -d MYRIAD and I receive:
E: [xLink] [ 22771] dispatcherEventReceive:347 dispatcherEventReceive() Read failed -4 | event 0x7f30937fdc60 XLINK_CREATE_STREAM_RESP
E: [xLink] [ 22773] eventReader:233 eventReader stopped
E: [watchdog] [ 22775] sendPingMessage:132 Failed send ping message: X_LINK_ERROR
E: [watchdog] [ 23775] sendPingMessage:132 Failed send ping message: X_LINK_ERROR
Otherwise, trying with -d HDDL, I get:
[14:26:41.4610]I[DeviceManager.cpp:520] Old worker(Wt4.1) is removed
info: /home/jenkins/workspace/IE-Packages/HDDL/Ubuntu16/hddl_mvnc/XLink/shared/XLink.c:1188:
info: /home/jenkins/workspace/IE-Packages/HDDL/Ubuntu16/hddl_mvnc/XLink/shared/XLink.c:1188:
info: /home/jenkins/workspace/IE-Packages/HDDL/Ubuntu16/hddl_mvnc/XLink/shared/XLink.c:1457: Reset device address: 32 with device type 2
[14:26:43.0714]I[AutoBoot.cpp:166] Device 4.1-ma2480 reload success.
[usblink_open:460] open vsc device succ:24,path=/dev/myriad0
[14:26:45.6228]I[DeviceManager.cpp:582] worker(Wt4.1) created on device(4.1), type(0)
[usb_read:372] error=-1, total size is=36,leave size=36
E: [xLink] [ 0] dispatcherEventReceive:323 dispatcherEventReceive() Read failed -1 | event 0x7f8db0873eb0 USB_READ_REL_RESP
E: [xLink] [ 0] eventReader:256 eventReader stopped
Please note that the model I used works fine with the Movidius Neural Compute Stick 1 over USB.
Thank you for reaching out! Could you please provide some additional information?
When you say the model works fine with the Intel Movidius Neural Compute Stick, are you simply connecting the USB and running your OpenVINO application (on the same system) with the -d MYRIAD parameter and it works?
Thank you for the quick response.
N2930 Pico-ITX SBC, 4 GB DDR3L.
Yes, the model works fine with the Movidius 1 USB stick on the same system using the -d MYRIAD option.
I installed an AI Core X to the mini PCI slot of a system with a fresh install of Ubuntu 16.04. I followed the getting started guide for the Neural Compute Stick and ran the demo with the following command successfully.
./demo_squeezenet_download_convert_run.sh -d MYRIAD
Could you check if the system is detecting the device with lsusb? If the demo runs on the Neural Compute Stick but not on the AI Core X, I recommend testing the AI Core X on a different system.
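As a quick way to run that check, Myriad devices enumerate over USB under the Intel Movidius vendor ID (03e7 — an assumption worth verifying on your unit). A minimal sketch that scans `lsusb` output for it:

```python
import re

MOVIDIUS_VENDOR_ID = "03e7"  # Intel Movidius USB vendor ID (assumption: verify on your device)

def find_movidius(lsusb_output):
    """Return the lsusb lines whose USB vendor ID matches Movidius."""
    pattern = re.compile(r"\bID\s+" + MOVIDIUS_VENDOR_ID + r":[0-9a-fA-F]{4}",
                         re.IGNORECASE)
    return [line for line in lsusb_output.splitlines() if pattern.search(line)]

# Typical use on the target board:
#   import subprocess
#   out = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
#   print(find_movidius(out) or "No Movidius device found")
```

If the device does not appear at all, the problem is below OpenVINO (drivers, udev rules, or the mini-PCIe slot itself) rather than in the toolkit.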
Running inference as an HDDL device requires additional setup steps; please follow the Steps for Intel® Vision Accelerator Design with Intel® Movidius™ VPU.
Thank you for the superfast response.
I've already followed the guides you recommended and tried to run all of the demos in the OpenVINO toolkit.
I always get the same error.
With lsusb, I do detect the device.
In your opinion, is it a problem with the system requirements? The CPU is an Intel® Atom™ E3825 with 2 GB of RAM.
I still have to test it on another system, but right now the only system available to me is the one I described.
I don't have any systems with similar processors to test on. Take a look at the list of system requirements for the Intel OpenVINO toolkit. Hopefully you can test the card on another system.
Hi, while I am waiting for the new card to arrive, I purchased the Movidius 2 USB stick with Myriad X, and that solved the previous problem. Testing the person-vehicle-bike-detection-crossroad-0078 network from OpenVINO for vehicle detection (http://docs.openvinotoolkit.org/latest/_person_vehicle_bike_detection_crossroad_0078_description_per...),
I noticed that, on my custom board, the Movidius 2 is faster than the Movidius 1 by roughly 100 ms.
On a PC, I don't have this kind of problem, and the difference between Mov 1 and Mov 2 is around 200 ms.
Here are some comparison data (inference time in milliseconds) using person_vehicle_bike_detection_crossroad_0078 on my custom board.
MOV 1 MOV 2
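For what it's worth, per-inference numbers like those above can be collected with a small timing helper along these lines (a plain-Python sketch; `infer` stands in for whatever synchronous inference call is being measured, e.g. an OpenVINO infer request, and is not taken from the original code):

```python
import time
from statistics import mean, median

def time_inferences(infer, n_runs=50, warmup=5):
    """Call `infer()` repeatedly and report per-call latency in milliseconds.

    A few warmup runs are discarded first, since the initial inferences on a
    Myriad device can include one-time graph-loading overhead.
    """
    for _ in range(warmup):
        infer()
    samples = []
    for _ in range(n_runs):
        start = time.perf_counter()
        infer()
        samples.append((time.perf_counter() - start) * 1000.0)
    return {"mean_ms": mean(samples), "median_ms": median(samples),
            "min_ms": min(samples), "max_ms": max(samples)}
```

Measuring both sticks with the same model, warmup count, and run count keeps the comparison fair. Note that the host CPU still performs image decode and pre/post-processing, so part of the measured time is host-side regardless of which stick runs the network.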
Why is the difference so small? Is the CPU also used to compute the inference, or is it handled entirely by the Movidius stick?
Would you mind starting a new discussion for this topic? Please include the steps to reproduce, the model you downloaded, and the Model Optimizer command you used.