Intel® oneAPI Base Toolkit
Support for the core tools and libraries within the base toolkit that are used to build and deploy high-performance data-centric applications.

Can a SYCL-only (Intel) header be given to dpct, or can dpct be limited to modifying only the source file?

yhmtsai
Beginner

Hi,

Is it possible to give Intel (SYCL) code to dpct?
We wrote our own cooperative-group abstraction on top of Intel sub-groups, and it exposes the same interface as CUDA's cooperative groups.
However, we cannot use it as a header when converting our other CUDA files with dpct:
the run fails in the first stage because some of the functions are obviously not CUDA.
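
For context, the wrapper looks roughly like this. It is only an illustrative sketch, not our actual header: the our_group and thread_tile names are made up for this post and only a couple of member functions are shown.

    #include <CL/sycl.hpp>

    namespace our_group {  // illustrative namespace

    // Tile abstraction over a SYCL sub-group that mirrors part of the CUDA
    // cooperative-groups interface, so device code can keep the same calls.
    class thread_tile {
    public:
        explicit thread_tile(sycl::nd_item<3> item) : item_(item) {}

        // counterpart of cooperative_groups tile.thread_rank()
        unsigned thread_rank() const {
            return static_cast<unsigned>(item_.get_sub_group().get_local_id()[0]);
        }

        // counterpart of cooperative_groups tile.size()
        unsigned size() const {
            return static_cast<unsigned>(item_.get_sub_group().get_local_range()[0]);
        }

    private:
        sycl::nd_item<3> item_;
    };

    // counterpart of cooperative_groups::tiled_partition<Size>(...);
    // Size is the logical tile width, kept only for interface parity
    template <int Size>
    inline thread_tile tile_partition(sycl::nd_item<3> item) {
        return thread_tile(item);
    }

    }  // namespace our_group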

Our current workaround is to create a fake header that declares all the classes and functions with empty implementations, and then to switch the header back after the conversion.
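
For illustration, the fake header is roughly shaped like this (again, illustrative names only; the bodies just have to keep dpct's CUDA front end happy during the first stage):

    // fake_group.hpp -- stub with the same interface as the real header but
    // trivial bodies, so the .cu files still parse as plain CUDA during migration.
    namespace our_group {

    struct thread_tile {
        __device__ unsigned thread_rank() const { return 0u; }  // stub
        __device__ unsigned size() const { return 0u; }         // stub
    };

    template <int Size>
    __device__ inline thread_tile tile_partition() { return thread_tile{}; }

    }  // namespace our_group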

Could dpct modify only the source file and not the related header files in the same folder?
The headers are only valid at the CUDA stage, so dpct should not copy and change them.

Thanks,
Mike



AbhishekD_Intel
Moderator

Hi Mike,


Thanks for reaching out to us.

The Intel DPC++ Compatibility Tool (DPCT) is designed to assist you in migrating a program written in CUDA to a program written in Data Parallel C++ (DPC++), which is based on modern C++. It only migrates supported CUDA functionality into equivalent DPC++ functions.


So even if you try to add some Intel-provided libraries to your code, they will not get modified. The only catch is that the functionality you want to add must be supported by DPC++.


It seems that you have your own implementation of some CUDA functionality, included it from a header, and are trying to migrate the source file. If I have interpreted this correctly, the migration will fail on that header (your implementation of the CUDA functionality) because there is no CUDA code in it.

In such cases you can migrate your source code and pass the path of your header file when compiling the migrated source. For example:

srcDir/
|--- your_code.cu
|--- implemented_header.h
|--- dpct_out/
     |--- your_code.dp.cpp

$ dpcpp [options] your_code.dp.cpp -I../


Here, -I contains the path to that header (your implementation); the example assumes the command is run from inside dpct_out/, so ../ points back to srcDir where implemented_header.h lives.

Or you can also go with your workaround. 


DPCT will only modify the specified source files and their associated files.

Hope this helps you resolve your use case. If you have more questions related to this issue, please do post them here. We also recommend that you share a small reproducer so that we can give more details if you run into further issues.



Warm Regards,

Abhishek



yhmtsai
Beginner

Hi Abhishek,

Sorry for the late update.

The migration fails if I put `#include <CL/sycl.hpp>` or `#include <dpct/dpct.hpp>` in the header.
More precisely, dpct.hpp also includes CL/sycl.hpp, so that case fails as well.
The attachment contains four files:
tidx_dpct.hpp and tidx_dpct.cpp are the DPC++ versions, while tidx.cpp and tidx.hpp are the CUDA versions.

Running dpct on tidx.cpp after switching tidx.hpp -> tidx_dpct.hpp gives the error above, even though that combination is not valid CUDA.

The question is how to convert tidx.cpp into tidx_dpct.cpp with dpct when we can provide both tidx_dpct.hpp and tidx.hpp.

Treating our own header like a system header prevents dpct from changing it, but only when the header is not in the same folder or a child folder;
with `-I .` or `-I child_folder/`, dpct still migrates the header.

When the header is treated as a system header, dpct does not check whether nd_item<3> is needed.
That somewhat makes sense, but then item_ct1 has to be added by hand after dpct has run.

Sorry for the confusing questions and usage.
SYCL needs nd_item<3> to get the kernel's launch configuration, which is what leads to this complicated situation...

Thanks,
Mike

yhmtsai
Beginner

Hi, 


When using tidx_dpct.hpp in tidx.cu, I forgot to add --extra-arg="-I /glob/development-tools/versions/oneapi/beta09/inteloneapi/compiler/2021.1-beta09/linux/include/sycl" to dpct, so dpct could not find the header.
Is there a command to find the include path easily? I currently use LD_LIBRARY_PATH to get a hint.
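
For completeness, the dpct invocation with --extra-arg then has roughly this shape (the --out-root directory name is just an example, not the one actually used here):

$ dpct tidx.cu --out-root=dpct_out --extra-arg="-I /glob/development-tools/versions/oneapi/beta09/inteloneapi/compiler/2021.1-beta09/linux/include/sycl"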


dpct also gives the following warning:

/glob/development-tools/versions/oneapi/beta09/inteloneapi/compiler/2021.1-beta09/linux/include/sycl/CL/cl_version.h:22:9: warning: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 220 (OpenCL 2.2) [-W#pragma-messages]
#pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 220 (OpenCL 2.2)")
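
(A side note that is my assumption rather than something confirmed in this thread: the warning only says that CL_TARGET_OPENCL_VERSION was not defined before the OpenCL headers were included, so defining it explicitly, for example via

    --extra-arg="-DCL_TARGET_OPENCL_VERSION=220"

should silence it without changing behaviour, since 220 is also the default it falls back to.)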


And, obviously, dpct cannot find `get_tidx()` because the DPC++ header only contains `get_tidx(nd_item)`.
Thus, the CUDA-valid header has to be the one given to dpct.
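
To make the mismatch concrete, the two headers look roughly like this (the signatures follow what is described in this thread; the bodies are illustrative guesses, and tidx_dpct.hpp is assumed to also include <CL/sycl.hpp> as mentioned earlier):

    // tidx.hpp (CUDA version)
    __device__ inline int get_tidx() { return threadIdx.x; }

    // tidx_dpct.hpp (DPC++ version): the nd_item has to be passed in explicitly
    inline int get_tidx(sycl::nd_item<3> item) {
        return static_cast<int>(item.get_local_id(2));  // dpct maps threadIdx.x to dimension 2
    }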

As the project keeps evolving, re-converting all of the related CUDA files to DPC++ every time is somewhat unnecessary. Furthermore, when some files need manual fixes, or an error occurs while converting an original CUDA file, dpct stops converting the others.

Thanks,
Mike

AbhishekD_Intel
Moderator

Hi Mike,

 

Apologies for the delay, and thanks for the detailed description. Here are some suggestions for solving your use case.

 

  1. It seems that you want to migrate your CUDA code (tidx.cu) against tidx_dpct.hpp rather than tidx.hpp. If you do it this way, then:
    1. You will definitely have to pass --extra-arg="-I <path_to_include_files_of_header>" while migrating your code, because the included header is not CUDA-related.
    2. If you migrate the combined code (tidx.cu + tidx_dpct.hpp) after changing the get_tidx() call to get_tidx(nd_item) to match the header, the CUDA kernel will no longer behave as a CUDA kernel; logically it becomes a broken mix of CUDA and DPC++, so it will not migrate properly. Please refer to the dpct_dpctheader dir of the attached file for more details.
    3. Hence, for migration you have to use a proper CUDA project that compiles without any errors, so we do not suggest following the above process.
  2. What you can do instead is migrate the original code (tidx.cu + tidx.hpp), and since you already have your own implementation of tidx.hpp, replace the migrated header with your implementation, i.e. with tidx_dpct.hpp (see the sketch after this list). This will not give you all those errors and warnings. Please refer to the attached file for more details.
  3. Alternatively, you can comment out the header's entry points in your code before migration and add your own implementation of that header to the migrated DPC++ code afterwards.
  4. If you update your base project frequently and feel that re-migrating the same files is unnecessary, you can keep your changes in separate files and migrate only those files instead of migrating the whole project again.
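
For point 2, the swap after migration essentially means pointing the migrated source at your own DPC++ header instead of the migrated CUDA one, either by editing the include line as sketched here or by overwriting the migrated header file with your implementation (file names as used in this thread; the lines are only illustrative):

    // tidx.dp.cpp (produced by dpct from tidx.cu)
    // #include "tidx.hpp"    // original CUDA header; its migrated copy is discarded
    #include "tidx_dpct.hpp"  // hand-written DPC++ implementation used instead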

 

Hope the above details help you solve your use case. Also, refer to the attached file for more details.

 

 

Warm Regards,

Abhishek

AbhishekD_Intel
Moderator

Hi,


Please give us an update on the provided details. Let us know if you have any more doubts related to this thread.


Thank you.


yhmtsai
Beginner

Hi Abhishek,

Sorry for the late reply.
I have been trying several ways to port the code in this situation.

  • I use the second way you mentioned for the relatively simple code.
    For example, some simple index calculations based on threadIdx or blockIdx.
    To prevent dpct from porting all related headers in the same or a child folder, I always copy the file to another folder, so I also copy these simple headers into my working directory.

  • When the headers are relatively complex, or the code would be somewhat mangled by dpct, I implement an interface header (something like the fake header, but it only needs the interface).
    For example, dpct converts tile_partition<subwarpsize>() into `sycl::group<3> group = item_ct1.get_group()`.
    Using tile_partition<subwarpsize>(this_thread_block) still gives the same converted result, and I have not found a proper way to convert it back.
    Moreover, some kernels need item_ct1.
    If we comment out the only statement in a kernel that requires item_ct1, dpct will not add item_ct1 for that kernel, which is another downside of commenting out code to avoid dpct errors.

    The current approach:
    create an interface header that contains the kernels that need item_ct1 in the DPC++ code, or that we do not want dpct to convert too heavily, and then switch the related header before and after the dpct conversion (a sketch of what dpct produces for this is shown after this list):
    __device__ void array_add_t(double *val) {
        auto tidx = threadIdx.x; // dpct will add item_ct1 for it
        array_add(val); // the cuda kernel call
    }
    
    __device__ auto this_thread_block_t() {
        auto tidx = threadIdx.x;
        // do not want any dpct conversion. function alias in cuda header
        return this_thread_block_i();
        // return this_thread_block();
        // depends on whether dpct has error message when converting.
    }

    Benefits of the interface header:
    1. It exposes only the interface, which may reduce dpct conversion issues.
    2. dpct can add item_ct1 where it is needed.
    3. Once the script and the interface are in place, these functions do not need manual fixes.
    4. It avoids dpct stopping on kernels for which it reports a lot of errors.
    Downsides:
    1. A script (sed) is needed to swap the headers, and it currently relies only on the function names.
    2. The interface has to be extracted manually.
    3. (Potential issue?) If some functions share the same name but are used differently, their interfaces may need to be written separately, or extra conditions added, to convert them correctly.
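
As mentioned above, here is a rough sketch of the shape dpct typically produces for array_add_t (item_ct1 follows dpct's conventional parameter name; this is an approximation rather than actual tool output, and the exact result may differ):

    // migrated form of array_add_t (approximate)
    void array_add_t(double *val, sycl::nd_item<3> item_ct1) {
        auto tidx = item_ct1.get_local_id(2);  // was threadIdx.x
        array_add(val);  // the call is kept; array_add itself is migrated separately
    }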

Any comments or concerns are welcome!

Best regards,
Mike

 

AbhishekD_Intel
Moderator

Hi,


Good to know that the provided details helped you and that you are able to migrate your use cases with them. The way you are dealing with the complex headers can also work.

Also, thanks for sharing the benefits and suggestions; we will try this approach out and suggest it to other users depending on their use cases.


I would also suggest that you try the latest beta-10 update for your migration. Do let us know if you have any other doubts related to this thread.


Warm Regards,

Abhishek



AbhishekD_Intel
Moderator

Hi,


Please update us if you have any other issues related to this thread.



Warm Regards,

Abhishek


yhmtsai
Beginner

Hi Abhishek,

I do not have any issues related to this thread.
If I run into a new issue, I will open a new thread on the forum.

Thanks for your help

 

Best regards,
Mike

AbhishekD_Intel
Moderator

Hi,


Thanks for the confirmation; we are no longer monitoring this thread. Please post a new thread if you have any further issues.



Warm Regards,

Abhishek

