OpenCL* for CPU
Ask questions and share information on Intel® SDK for OpenCL™ Applications and OpenCL™ implementations for Intel® CPU.
Announcements
This forum covers OpenCL* for CPU only. OpenCL* for GPU questions can be asked in the GPU Compute Software forum. Intel® FPGA SDK for OpenCL™ questions can be asked in the FPGA Intel® High Level Design forum.

Can OpenCL-based code run on DevCloud oneAPI JupyterLab?

Wei-Chih
Novice

Hello,

I want to know whether OpenCL-based code can run on the oneAPI DevCloud and target CPU, GPU, and FPGA. If I add a SYCL device-selector statement to my OpenCL-based code, can I run it on DevCloud?

Thanks

cw_intel
Moderator

Hi,

I'm not sure how you would add the SYCL selector to your OpenCL code; perhaps you can share a simple example.

Do you want to use your OpenCL kernel from a SYCL program? You can do that through backend interoperability. For the details, refer to the SYCL 2020 specification (https://registry.khronos.org/SYCL/specs/sycl-2020/pdf/sycl-2020.pdf), Appendix C: OpenCL backend specification. A simple example is attached below for your reference; I'm not sure if it meets your requirements. You can compile the code with the command "dpcpp test.cpp -lOpenCL".


//test.cpp
#include <CL/sycl.hpp>
#include <array>
#include <iostream>

int main() {
  constexpr size_t size = 16;
  std::array<int, size> data;
  for (size_t i = 0; i < size; i++) { data[i] = (int)i; }

  // Create a SYCL device/context, then extract the native OpenCL handles.
  // (On older compilers the member form dev.get_native<sycl::backend::opencl>()
  // and the plain clCreateCommandQueue() call work as well.)
  sycl::device dev(sycl::default_selector{});
  sycl::context ctx(dev);
  auto ocl_dev = sycl::get_native<sycl::backend::opencl, sycl::device>(dev);
  auto ocl_ctx = sycl::get_native<sycl::backend::opencl, sycl::context>(ctx);

  // Build a native OpenCL queue and wrap it back into a SYCL queue.
  cl_int err = CL_SUCCESS;
  cl_command_queue ocl_queue = clCreateCommandQueueWithProperties(ocl_ctx, ocl_dev, 0, &err);
  sycl::queue q = sycl::make_queue<sycl::backend::opencl>(ocl_queue, ctx);

  // Create an OpenCL buffer from the host data and wrap it as a SYCL buffer.
  cl_mem ocl_buf = clCreateBuffer(ocl_ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                  size * sizeof(int), &data[0], &err);
  sycl::buffer<int, 1> buffer = sycl::make_buffer<sycl::backend::opencl, int>(ocl_buf, ctx);

  // Compile an OpenCL C kernel at run time and wrap it as a SYCL kernel.
  const char* kernelSource = R"CLC(
    kernel void add(global int* data) {
      int index = get_global_id(0);
      data[index] = data[index] + 1;
    }
  )CLC";
  cl_program ocl_program = clCreateProgramWithSource(ocl_ctx, 1, &kernelSource, nullptr, &err);
  clBuildProgram(ocl_program, 1, &ocl_dev, nullptr, nullptr, nullptr);
  cl_kernel ocl_kernel = clCreateKernel(ocl_program, "add", nullptr);
  sycl::kernel add_kernel = sycl::make_kernel<sycl::backend::opencl>(ocl_kernel, ctx);

  // Launch the wrapped OpenCL kernel through the SYCL runtime.
  q.submit([&](sycl::handler& h) {
    auto data_acc = buffer.get_access<sycl::access_mode::read_write, sycl::target::device>(h);
    h.set_args(data_acc);                             // bind kernel arguments by position
    h.parallel_for(sycl::range<1>(size), add_kernel);
  }).wait();

  // Read the result back through the native OpenCL queue and validate.
  clEnqueueReadBuffer(ocl_queue, ocl_buf, CL_TRUE, 0, size * sizeof(int), &data[0], 0, NULL, NULL);
  for (size_t i = 0; i < size; i++) {
    if (data[i] != (int)i + 1) {
      std::cout << "Results did not validate at index " << i << "!\n";
      return -1;
    }
  }
  std::cout << "Success!\n";
  return 0;
}
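
Regarding your original question about targeting CPU, GPU, and FPGA: the only SYCL-specific change is the device selector you pass when creating the device or queue. Below is a minimal sketch (my illustration, not a tested DevCloud sample) using the pre-SYCL-2020 selector classes that dpcpp accepts; swap the selector to pick the device type.

//selector_demo.cpp -- minimal sketch; compile with "dpcpp selector_demo.cpp"
#include <CL/sycl.hpp>
#include <iostream>

int main() {
  // cpu_selector targets the CPU; use sycl::gpu_selector{} for a GPU or
  // sycl::accelerator_selector{} for an FPGA/accelerator device.
  sycl::queue q{sycl::cpu_selector{}};
  std::cout << "Running on: "
            << q.get_device().get_info<sycl::info::device::name>() << "\n";
  return 0;
}

On DevCloud you would also need to request a node that actually has the device type you select.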


Thanks


Wei-Chih
Novice

Hi,

Thanks for your reply.

If I want to try this OpenCL project from GitHub (linked below), how should I go about it?

https://github.com/eejlny/BUDE-HARP/tree/master/bude_gpu

cw_intel
Moderator

Hi,


I think you can compile the code with the command "dpcpp bude.cpp -lOpenCL" as long as the OpenCL runtime is installed on your machine. If you use DevCloud, it should already be installed.
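
If you want to check first that the OpenCL runtime and its devices are visible on your node, you can run the clinfo utility, or build a short enumeration program like the sketch below (my illustration; the file name list_platforms.cpp is just an example). Compile it with "dpcpp list_platforms.cpp -lOpenCL".

//list_platforms.cpp -- minimal sketch to verify the OpenCL runtime is visible
#include <CL/cl.h>
#include <iostream>
#include <vector>

int main() {
  cl_uint count = 0;
  clGetPlatformIDs(0, nullptr, &count);            // ask how many platforms exist
  std::vector<cl_platform_id> platforms(count);
  clGetPlatformIDs(count, platforms.data(), nullptr);
  for (cl_platform_id p : platforms) {
    char name[256] = {};
    clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(name), name, nullptr);
    std::cout << name << "\n";                     // e.g. "Intel(R) OpenCL"
  }
  return 0;
}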


Thanks


Wei-Chih
Novice

OK, I will try it!

cw_intel
Moderator

Hi,


We haven't heard back from you for a long time, so we are assuming that the provided details helped you solve your problem. If you require additional assistance from Intel, please start a new thread. Any further interaction in this thread will be considered community only.


Thanks


