Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

OpenMP and SYCL offloading to GPU for Intel MKL

janez-makovsek
New Contributor I

Dear All,

 

What is the current status of this release note from Intel MKL 2021.4:

 

"oneMKL adds GPU support through DPC++ and OpenMP offload APIs"

 

Since DPC++/SYCL is based more directly on OpenCL, and OpenMP 5.1 more indirectly, is there a difference in the GPU support available? The docs state that MKL has kernels for BLAS levels 1, 2, and 3 fully implemented for GPU offload.

When used with SYCL, will these kernels work on any (AMD, NVIDIA, Intel) GPU?

When used with OpenMP (like the offload examples provided with MKL), will this also work on AMD and NVIDIA GPUs, or only on Intel GPUs?

 

Is there a difference between OpenMP 5.1 and SYCL in which GPU vendors they support?

Personally, I favor OpenMP, and I would very much like to see MKL GPU support become generic (working on any GPU).

 

Thanks!
Atmapuri

 

1 Solution
VidyalathaB_Intel
Moderator

Hi,

 

Thanks for reaching out to us.

 

>>What is the current status for this release note from intel MKL 2021.4

 

Please refer to the link below for the release notes of oneMKL 2021.4.0:

https://www.intel.com/content/www/us/en/developer/articles/release-notes/onemkl-release-notes.html

 

>>is there a difference in GPU support available?

Could you please elaborate on what kind of differences you are asking about?

 

>>Is there a difference between both openMP 5.1 and SYCL which GPU makers they support

......When used with SYCL, will these kernels work on any (AMD, NVidia, Intel) GPU?

 

In this case, you can use the open-source oneMKL interfaces project, which enables you to work on NVIDIA GPUs.

 

Please refer to the link below regarding the open-source oneMKL interfaces for using oneMKL on NVIDIA GPUs:

https://github.com/oneapi-src/oneMKL
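
As an illustration, here is a minimal sketch of calling a oneMKL BLAS level 3 routine (GEMM) through the SYCL USM API. This assumes the open-source oneMKL interfaces are installed with a backend built for your GPU (Intel, NVIDIA via cuBLAS, or AMD via rocBLAS); which devices actually work depends entirely on which backends were enabled at build time.

```cpp
#include <sycl/sycl.hpp>
#include <oneapi/mkl.hpp>
#include <cstdint>
#include <vector>

int main() {
    // Pick whatever GPU the SYCL runtime finds; with the open-source
    // oneMKL interfaces this can be a non-Intel device if the matching
    // backend (e.g. cuBLAS) was built in.
    sycl::queue q{sycl::gpu_selector_v};

    const std::int64_t n = 4;
    std::vector<double> A(n * n, 1.0), B(n * n, 1.0), C(n * n, 0.0);

    // Device-side USM buffers for the three matrices.
    double* dA = sycl::malloc_device<double>(n * n, q);
    double* dB = sycl::malloc_device<double>(n * n, q);
    double* dC = sycl::malloc_device<double>(n * n, q);
    q.memcpy(dA, A.data(), n * n * sizeof(double)).wait();
    q.memcpy(dB, B.data(), n * n * sizeof(double)).wait();
    q.memcpy(dC, C.data(), n * n * sizeof(double)).wait();

    // C = 1.0 * A * B + 0.0 * C (double-precision GEMM, column major).
    oneapi::mkl::blas::column_major::gemm(
        q, oneapi::mkl::transpose::nontrans, oneapi::mkl::transpose::nontrans,
        n, n, n, 1.0, dA, n, dB, n, 0.0, dC, n).wait();

    q.memcpy(C.data(), dC, n * n * sizeof(double)).wait();
    sycl::free(dA, q);
    sycl::free(dB, q);
    sycl::free(dC, q);
    return 0;
}
```

The same source compiles against Intel's binary oneMKL with `icpx -fsycl`, or against the open-source interfaces library for other vendors' GPUs.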

 

>>When used with OpenMP (like the examples for offload provided with MKL), will this work also on AMD and NVidia or only Intel GPUs? 

 

OpenMP offload is supported on Intel GPUs for running standard oneMKL computations.

Please refer to the below link for more details

https://www.intel.com/content/www/us/en/develop/documentation/onemkl-developer-reference-c/top/openmp-offload/openmp-offload-for-onemkl.html
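
For reference, the OpenMP offload path follows the dispatch pattern from Intel's documentation: map the data to the device, then place the standard CBLAS call under `#pragma omp target variant dispatch` so the GPU variant is invoked. A minimal sketch (built with `icpx -fiopenmp -fopenmp-targets=spir64 -qmkl`; the device number 0 assumes the default Intel GPU):

```cpp
#include <mkl.h>
#include <mkl_omp_offload.h>
#include <vector>

int main() {
    const MKL_INT n = 4;
    std::vector<double> A(n * n, 1.0), B(n * n, 1.0), C(n * n, 0.0);
    double *a = A.data(), *b = B.data(), *c = C.data();
    const int dev = 0;  // default Intel GPU device

    // Map the matrices to the device for the duration of the region.
    #pragma omp target data map(to: a[0:n*n], b[0:n*n]) \
                            map(tofrom: c[0:n*n]) device(dev)
    {
        // Dispatch the standard CBLAS call to its GPU variant.
        #pragma omp target variant dispatch device(dev) use_device_ptr(a, b, c)
        cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, a, n, b, n, 0.0, c, n);
    }
    return 0;
}
```

Note that this dispatch mechanism targets Intel GPUs through the Intel compilers; it is not a portable path to AMD or NVIDIA hardware.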

 

Hope the provided information helps.

 

Regards,

Vidya.

 

VidyalathaB_Intel
Moderator

Hi,

 

Thanks for accepting our solution.

As this issue is resolved, we are closing this thread. Please post a new question if you need any additional information from Intel, as this thread will no longer be monitored.

 

Have a Nice Day!

 

Regards,

Vidya.

 
