Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

human_pose_estimation_demo error on Raspbian OS (OpenVINO)?


After running for a while, the demo reports the following error:

pi@raspberrypi:~/workspace/inference_engine_vpu_arm/deployment_tools/inference_engine/samples/build/armv7l/Release $ ./human_pose_estimation_demo -i cam -m /home/pi/workspace/inference_engine_vpu_arm/deployment_tools/intel_models/human-pose-estimation-0001/FP16/human-pose-estimation-0001.xml -d MYRIAD
API version ............ 1.4
Build .................. 19154
[ INFO ] Parsing input parameters

** (human_pose_estimation_demo:5120): WARNING **: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files
E: [xLink] [ 847042] dispatcherEventSend:889 Write failed header -1 | event USB_WRITE_REQ

E: [xLink] [ 847154] dispatcherEventReceive:308 dispatcherEventReceive() Read failed -4 | event 0x6d4feb50 USB_READ_REL_RESP

E: [xLink] [ 847154] eventReader:256 eventReader stopped
E: [ncAPI] [ 847154] ncGraphQueueInference:3538 Can't send trigger request
E: [ncAPI] [ 847199] ncFifoDestroy:2888 Failed to write to fifo before deleting it!
[ ERROR ] Failed to queue inference: NC_ERROR

thank you very much!


Running the human pose estimation demo on Raspberry Pi doesn't seem to be supported yet. When I tried it anyway, the application simply got stuck. It doesn't work for me, at least not with the Raspberry Pi camera (I don't have a USB camera).

So I start the demo using

./human_pose_estimation_demo -m human-pose-estimation-0001-FP16.xml -d MYRIAD -i http://localhost:8090/test.mjpg

with an ffserver instance running that is configured as follows:

Port 8090
 # bind to all IPs aliased or not
 BindAddress 0.0.0.0
 # max number of simultaneous clients
 MaxClients 10
 # max bandwidth per-client (kb/s)
 MaxBandwidth 1000
 # Suppress that if you want to launch ffserver as a daemon.
 NoDaemon

<Feed feed1.ffm>
 File /tmp/feed1.ffm
 FileMaxSize 10M
</Feed>

<Stream test.mjpg>
 Feed feed1.ffm
 Format h264
 VideoFrameRate 10
 VideoSize 640x480
 VideoBitRate 1024
 # VideoQMin 1
 # VideoQMax 100
 Strict -1
</Stream>

I start the server like so:

ffserver -f ./ffserver.conf & ffmpeg -v verbose -r 5 -s 640x480 -f video4linux2 -i /dev/video0 http://localhost:8090/feed1.ffm

Then the demo always crashes, each time with a different error message (unreadable characters, RAM access errors, etc.).

I guess the problem lies somewhere in human_pose_estimator.cpp:

InferenceEngine::Blob::Ptr input = request.GetBlob(network.getInputsInfo().begin()->first);
auto buffer = input->buffer().as<InferenceEngine::PrecisionTrait<InferenceEngine::Precision::FP32>::value_type *>();
preprocess(image, static_cast<float*>(buffer));


because the code (and the preprocess method) uses FP32 precision, while the model IR is FP16. However, I haven't been able to successfully change the code to FP16. It would be interesting if someone at Intel could make the demo work on the RPi as well. I'm not convinced that this really is the problem, though, because when the NCS2 stick is attached to my notebook the demo runs without any code changes (even without the webcam workaround).
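For what it's worth, converting the code to FP16 would mean repacking the preprocessed floats into half-precision bit patterns on the host, since C++ has no native 16-bit float type. This standalone sketch (my own illustration, not code from the demo or from OpenVINO) shows a simplified IEEE-754 FP32-to-FP16 conversion, just to give an idea of what such a repack involves:

```cpp
#include <cstdint>
#include <cstring>

// Convert an IEEE-754 single-precision float to a half-precision (FP16)
// bit pattern. Simplified: truncates the mantissa (no round-to-nearest),
// flushes subnormal results to zero, and saturates overflow to infinity.
uint16_t float_to_half(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));                          // reinterpret float as raw bits
    uint16_t sign = static_cast<uint16_t>((bits >> 16) & 0x8000u); // move sign bit into place
    int32_t exp = static_cast<int32_t>((bits >> 23) & 0xFF) - 127 + 15; // rebias exponent (127 -> 15)
    uint32_t mant = bits & 0x7FFFFFu;                              // 23-bit mantissa
    if (exp <= 0)  return sign;                                    // too small for FP16: flush to zero
    if (exp >= 31) return sign | 0x7C00u;                          // too large for FP16: +/- infinity
    return sign | static_cast<uint16_t>(exp << 10)                 // 5-bit exponent
                | static_cast<uint16_t>(mant >> 13);               // keep top 10 mantissa bits
}
```

That said, as far as I know the MYRIAD plugin accepts FP32 input blobs even for FP16 IRs and converts internally, which would match the observation that the demo runs unchanged on a notebook. So the host-side FP32 buffer may well be fine.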
