Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

'openvino diagnose detection error'


Dear Intel community

I am trying to use OpenVINO with FPGA support. I am using the 'Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA', and my software configuration is Ubuntu 16.04 with kernel version 4.14.20-041420-generic.

I am following the installation steps for OpenVINO with FPGA support, which are mentioned in this link ->

I have skipped the first two steps, which were:

1. Configure the Intel® Arria® 10 GX FPGA Development Kit

2. Program the Intel® Arria® 10 GX FPGA Development Kit

After that I followed the remaining steps, but the problem is here:

When I ran -> aocl diagnose

my output for the above command was:

root@iwave:/home/iwave/Downloads/fpga_support_files# aocl diagnose
No devices attached for package:

Call "aocl diagnose <device-names>" to run diagnose for specified devices
Call "aocl diagnose all" to run diagnose for all devices

Since I am using the 'Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA' hardware, should I follow the same steps mentioned in this link ->, or do I have to follow different steps to configure the 'Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA' for OpenVINO?

Please, can anybody guide me to sort out this issue?


3 Replies

Hi Ram,

From the aocl diagnose output you've provided, the AOCL_BOARD_PACKAGE_ROOT environment variable is not pointing to the BSP for the device you are using, the 'Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA'. I will look into the changes you can make and get back to you promptly.
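As a rough sketch, repointing that variable at the PAC BSP and re-running the diagnostic would look something like the following (the BSP path below is an assumption; substitute the path from your own Acceleration Stack install):

```shell
# Hypothetical BSP location for the PAC card; adjust to your actual install path.
export AOCL_BOARD_PACKAGE_ROOT=/opt/intel/OpenCL/Boards/a10_1150_sg1

# Re-run the board diagnostic so the new board package is picked up
# (guarded so this sketch is harmless on a machine without the tools).
if command -v aocl >/dev/null 2>&1; then
    aocl diagnose
else
    echo "aocl not found on PATH; source init_opencl.sh from your OpenCL RTE first"
fi
```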





At this time, we do not have the updated instructions for R5. For the time being, please use these instructions for the 'Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA' hardware.

If you come across another problem, please start a new discussion to receive the appropriate assistance.

Thank you,



Dear Micael

Thanks for the support. I have configured the Intel PAC with Arria 10 GX FPGA for OpenVINO R3 (using CentOS) and have executed some demo samples on the HETERO plugin. The FPGA is now working fine with OpenVINO, but I have one issue, discussed below:

I have trained the 'ssd_inception_v2_coco' pre-trained model on my custom object. After training, I converted the trained model to IR files using the Model Optimizer.

I feed these IR files to the Inference Engine, and it detects my custom object when my plugin is CPU.
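For reference, the conversion step for a retrained SSD Inception v2 model can be sketched roughly as below. Note the FPGA plugin expects FP16 IR, so converting with --data_type FP16 matters for the HETERO:FPGA,CPU path. All paths here are placeholders, and the ssd_v2_support.json config is an assumption based on 2018-era Model Optimizer releases:

```shell
# Placeholder locations; adjust to your OpenVINO install and model directory.
MO_ROOT=/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer
MODEL_DIR=$HOME/Downloads/Custom_object

# Convert the frozen TensorFlow graph to FP16 IR for the FPGA path
# (guarded so this sketch is harmless where OpenVINO is not installed).
if [ -f "$MO_ROOT/mo_tf.py" ]; then
    python3 "$MO_ROOT/mo_tf.py" \
        --input_model "$MODEL_DIR/frozen_inference_graph.pb" \
        --tensorflow_object_detection_api_pipeline_config "$MODEL_DIR/pipeline.config" \
        --tensorflow_use_custom_operations_config "$MO_ROOT/extensions/front/tf/ssd_v2_support.json" \
        --data_type FP16 \
        --output_dir "$MODEL_DIR"
else
    echo "Model Optimizer not found; run this on the machine with OpenVINO installed"
fi
```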

But the problem is that I am not able to run my inference on the HETERO plugin, -d HETERO:FPGA,CPU.


It is throwing the error mentioned below:


[iwave@localhost Release]$ crossroad_camera_sample -i /home/iwave/Downloads/raccoon-180.jpg -m /home/iwave/Downloads/Custom_object/frozen_inference_graph.xml -d HETERO:FPGA,CPU
    API version ............ 1.2
    Build .................. 13911
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading plugin HETERO:FPGA,CPU

    API version ............ 1.2
    Build .................. heteroPlugin
    Description ....... heteroPlugin
[ INFO ] Loading plugin CPU

    API version ............ 1.2
    Build .................. lnx_20180510
    Description ....... MKLDNNPlugin
[ INFO ] Loading network files for PersonDetection
[ INFO ] Batch size is forced to  1
[ INFO ] Checking Person Detection inputs
[ INFO ] Checking Person Detection outputs
[ INFO ] Loading Person Detection model to the HETERO:FPGA,CPU plugin
Emulator Context callback: Could not allocate a command queue
Location: /home/user/teamcity/work/scoring_engine_build/releases_openvino-2018-r3/thirdparty/dla/api/src/dla_runtime.cpp:115
Failed clCreateCommandQueue
[ ERROR ] DLA error: Error: Failure due to generic standard exception

My bitstream programming command is -> aocl program acl0 2-0-1_RC_FP16_SSD300.aocx

It seems like my bitstream file is not appropriate for my custom object detection model. If I am right, then please guide me on how to generate bitstreams for a custom object, or how to generate a bitstream for any other pre-trained model.
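For what it's worth, the prebuilt bitstreams shipped with the FPGA plugin target a network topology family (e.g. SSD at FP16) rather than particular trained weights, so a retrained SSD model would normally run on the same SSD bitstream; the clCreateCommandQueue failure may instead point at the OpenCL runtime environment. A rough sketch of selecting and programming a bitstream follows; the directory layout and the DLA_AOCX variable are assumptions based on 2018-era releases:

```shell
# Assumed 2018 R3 install layout for the shipped bitstreams; adjust as needed.
BITSTREAM_DIR=/opt/intel/computer_vision_sdk/bitstreams/a10_dcp_bitstreams

# Topology-family bitstream for SSD at FP16, matching an FP16 IR.
export DLA_AOCX="$BITSTREAM_DIR/2-0-1_RC_FP16_SSD300.aocx"

# Program the board (guarded so the sketch is harmless where aocl is absent).
if command -v aocl >/dev/null 2>&1; then
    aocl program acl0 "$DLA_AOCX"
else
    echo "aocl not available; run on the host with the PAC card installed"
fi
```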

Once again, thank you very much for your support.



