Kulkarni__Vinay
New Contributor I

OpenCL code running in OpenVINO 2019.R1 using the aoc compiler for FPGA


Hi,

    I want to use OpenVINO 2019.R1 with the bitstreams for the Arria 10 GX dev kit. I am running the classification_sample code on the Arria 10 GX FPGA dev kit: I used "2019R1_A10DK_FP11_ResNet_VGG.aocx" and it ran successfully. Now I want to use this bitstream (via classification_sample) and add some post-processing steps to be done on the FPGA. So I wrote a very simple hello_world.cl in OpenCL and tried calling its kernels from the classification sample. I was able to compile the .cl code and load it inside the classification sample with the clCreateProgram and build APIs, but when I call clCreateKernel I get "CL_INVALID_KERNEL_NAME" (error -46). I have tried all combinations and the kernel name looks correct. I am not sure what is wrong, as this platform reports OpenCL 1.0.

Can you please check what is wrong with my code, or is there an issue with OpenCL 1.0?

Attached are my source files.


The commands that ran successfully are:

source /opt/intel/openvino_fpga_2019.1.094/bin/setupvars.sh
source ~/Downloads/fpga_support_files/setup_env.sh

aocl program acl0  /opt/intel/openvino_fpga_2019.1.094/bitstreams/a10_devkit_bitstreams/2019R1_A10DK_FP11_ResNet_VGG.aocx

aoc /opt/intel/openvino_2019.1.094/deployment_tools/inference_engine/samples/classification_sample/hello_world.cl -o hello_world.aocx  -bsp-flow=flat  -board=a10gx

vinay@XeonServer:~/inference_engine_samples_build/intel64/Release$ ./classification_sample -i ~/Documents/picture/ILSVRC2012_val_00000010.JPEG -m /home/vinay/Resnet-50/ResNet-50-model.xml -d HETERO:FPGA,CPU

====> Here is where I hit CL_INVALID_KERNEL_NAME.


JesusG_Intel
Moderator

Hello Vinay,

Can your code run with the CPU as the target device?

You mention that you are running OpenCL 1.0. The software requirements list OpenCL 2.1. Could this be the cause?

Regards,

JesusG_Intel
Moderator

Vinay, did you verify that an OpenCL example works as described in section 2.6 of the Intel FPGA SDK for OpenCL Pro Edition: Getting Started Guide?

Regards,

Kulkarni__Vinay
New Contributor I

Yes Jesus, I verified that it works outside of the OpenVINO environment. But when I try it with OpenVINO, I get the "CL_INVALID_KERNEL_NAME" error (-46).

Is running my custom kernel alongside OpenVINO not supported? If so, why?

 

Regards

Vinay

 

JesusG_Intel
Moderator

Hello Vinay,

I see you added more information in another thread. I will delete that thread and copy the information from there to here.

Can you try upgrading OpenVINO to 2020.R3 or 2020.R2?

 

    Thanks for your reply. I have already installed OpenVINO 2019.R1. With 2019.R1, I am trying to use custom OpenCL kernels (say, a hello world program) before or after running the classification bitstreams.

I am using Ubuntu 18.04 on a Xeon machine with an Arria 10 GX FPGA dev kit.

I have also installed the Intel FPGA SDK for OpenCL 19.2, whose aoc compiler is used to compile hello_world.cl.

But when I try to run the program inside the OpenVINO environment, I get errors such as "CL_INVALID_PROGRAM" or "CL_INVALID_KERNEL_NAME".

Please find attached my code for reference. The same program runs fine outside of OpenVINO.

If I disable the hello world call inside main.cpp, the classification sample runs correctly, whereas when I call the OpenCL APIs to load hello_world, I get the errors below:

 

vinay@XeonServer:~/inference_engine_debug_build/intel64/Debug$ ./classification_sample -i ~/Documents/picture/ILSVRC2012_val_00000010.JPEG -m /home/vinay/Resnet-50/ResNet-50-model.xml -d HETERO:FPGA,CPU
No of Platforms :1
0Query for number of platforms
01
0Query for platform name size 
0Intel(R) FPGA SDK for OpenCL(TM)
Intel(R) FPGA SDK for OpenCL(TM)0x7fca3c037a80
Querying platform for info:
==========================
CL_PLATFORM_NAME                         = Intel(R) FPGA SDK for OpenCL(TM)
CL_PLATFORM_VENDOR                       = Intel(R) Corporation
CL_PLATFORM_VERSION                      = OpenCL 1.0 Intel(R) FPGA SDK for OpenCL(TM), Version 19.2

Getdevices0
Querying device for info:
========================
CL_DEVICE_NAME                           = a10gx : Arria 10 Reference Platform (acla10_ref0)
CL_DEVICE_VENDOR                         = Intel(R) Corporation
CL_DEVICE_VENDOR_ID                      = 4466
CL_DEVICE_VERSION                        = OpenCL 1.0 Intel(R) FPGA SDK for OpenCL(TM), Version 19.2
CL_DRIVER_VERSION                        = 19.2
CL_DEVICE_ADDRESS_BITS                   = 64
CL_DEVICE_AVAILABLE                      = true
CL_DEVICE_ENDIAN_LITTLE                  = true
CL_DEVICE_GLOBAL_MEM_CACHE_SIZE          = 32768
CL_DEVICE_GLOBAL_MEM_CACHELINE_SIZE      = 0
CL_DEVICE_GLOBAL_MEM_SIZE                = 2147482624
CL_DEVICE_IMAGE_SUPPORT                  = false
CL_DEVICE_LOCAL_MEM_SIZE                 = 16384
CL_DEVICE_MAX_CLOCK_FREQUENCY            = 1000
CL_DEVICE_MAX_COMPUTE_UNITS              = 1
CL_DEVICE_MAX_CONSTANT_ARGS              = 8
CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE       = 536870656
CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS       = 3
CL_DEVICE_MEM_BASE_ADDR_ALIGN            = 8192
CL_DEVICE_MIN_DATA_TYPE_ALIGN_SIZE       = 1024
CL_DEVICE_PREFERRED_VECTOR_WIDTH_CHAR    = 4
CL_DEVICE_PREFERRED_VECTOR_WIDTH_SHORT   = 2
CL_DEVICE_PREFERRED_VECTOR_WIDTH_INT     = 1
CL_DEVICE_PREFERRED_VECTOR_WIDTH_LONG    = 1
CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT   = 1
CL_DEVICE_PREFERRED_VECTOR_WIDTH_DOUBLE  = 0
Command queue out of order?              = false
Command queue profiling enabled?         = true
0
Using AOCX: /home/vinay/inference_engine_debug_build/hello_world.aocx
-44Failed to load binary filecreate program-30
Build program:-44
-44Failed to create kernel
-48launch kernel
0finish

When I disable it, I get a successful run as below:

vinay@XeonServer:~/inference_engine_debug_build/intel64/Debug$ ./classification_sample -i ~/Documents/picture/ILSVRC2012_val_00000010.JPEG -m /home/vinay/Resnet-50/ResNet-50-model.xml -d HETERO:FPGA,CPU
[ INFO ] InferenceEngine: 
    API version ............ 1.6
    Build .................. custom_releases/2019/R1_c9b66a26e4d65bb986bb740e73f58c6e9e84c7c2
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /home/vinay/Documents/picture/ILSVRC2012_val_00000010.JPEG
[ INFO ] Loading plugin

    API version ............ 1.6
    Build .................. heteroPlugin
    Description ....... heteroPlugin
[ INFO ] Loading network files:
    /home/vinay/Resnet-50/ResNet-50-model.xml
    /home/vinay/Resnet-50/ResNet-50-model.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (480, 360) to (224, 224)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Processing output blobs

Top 10 results:

Image /home/vinay/Documents/picture/ILSVRC2012_val_00000010.JPEG

classid probability
------- -----------
285     0.2507518  
277     0.2507518  
278     0.0922464  
282     0.0922464  
287     0.0922464  
259     0.0718416  
263     0.0339356  
356     0.0205830  
281     0.0205830  
283     0.0205830  

total inference time: 45.9720418
Average running time of one iteration: 45.9720418 ms

Throughput: 21.7523512 FPS

[ INFO ] Execution successful

 

Kulkarni__Vinay
New Contributor I

I cannot use the latest OpenVINO, as I need the bitstreams for the Arria 10 GX dev kit.

Hence I installed 2019.R1.

 

JesusG_Intel
Moderator

Hello Vinay,

I heard back from engineering. Their response was, "You cannot run a hello world program through OpenVINO inference, as it doesn't contain an FPGA kernel." Does this make sense?

Regards,

Kulkarni__Vinay
New Contributor I

Why? 

I am loading the hello_world kernel via the OpenCL API, as you can see in the source file.

I have also compiled hello_world.cl to create hello_world.aocx, which gets loaded via clCreateProgramWithBinary.

Are you saying OpenVINO does not support writing custom OpenCL kernels for the FPGA?

Please confirm.

 

Regards

Vinay

 

Kulkarni__Vinay
New Contributor I

Hi Jesus,

     Thanks for your efforts.

I understand that in the OpenVINO environment we cannot write a custom kernel. But what is the technical justification for these not being able to coexist?

I am sure many system designers would be eager to use your OpenVINO bitstreams and also do some pre- or post-processing, as per their needs, in the FPGA (not on the host). For such users, is OpenVINO not usable?

It would be great if you could let us know the justification (technical or business) for this.


JesusG_Intel
Moderator

Hello Vinay, I will follow up with engineering on this issue. Please stay tuned.

Regards,

JesusG_Intel
Moderator

Hello Vinay, engineering got back to me with an answer.

When you program hello_world.aocx, it overwrites the 2019R1...aocx image, so the inference engine cannot find the OpenVINO FPGA kernels on the board since they are no longer there. The short answer is no, you cannot simply write your own FPGA kernel and have it run alongside the OpenVINO FPGA kernels.

Regards,  


Kulkarni__Vinay
New Contributor I

Thanks Jesus for confirming.

But I wonder why I cannot load the OpenVINO bitstream again after running hello_world via clCreateProgramWithBinary, and then execute the OpenVINO bitstream.

In fact this was my approach, but OpenVINO did not allow me to do it.

Anyway, thanks for chasing this.

We can close this thread.

 
