Hello,
I managed to compile Intel LLVM on a cluster with NVIDIA GPUs. Everything works for NVIDIA targets, but I am not able to use CPU targets.
From [these instructions](https://intel.github.io/llvm-docs/GetStartedGuide.html) I gather that the OpenCL runtime for CPUs is needed, but I cannot figure out how to integrate it into my build as a normal user.
EDIT:
I tried to use `native_cpu` as the target, but this option does not find any device. I get this error:
terminate called after throwing an instance of 'sycl::_V1::runtime_error'
what(): No device of requested type available. -1 (PI_ERROR_DEVICE_NOT_FOUND)
You can download the Intel® oneAPI Base Toolkit, which includes the Intel® CPU Runtime for OpenCL™ Applications, or visit the Intel® CPU Runtime for OpenCL™ Applications with SYCL support page to download the standalone CPU runtime package.
Thanks.
Thank you for your reply.
I am not able to do that because I do not have root access, unless there is a way to install packages for a specific user.
Installation does not require root privileges. Did you encounter any problems during installation without root access?
Thanks.
Hello,
Since the instructions had `sudo` in them, I assumed it would only work with root.
I managed to install it as a normal user using:
./l_BaseKit_p_2024.0.1.46_offline.sh -a -s --eula accept --download-cache <some/path>/tttt/ --install-dir <another/path>/intel
Then I ran the script:
source setvars.sh
Then I tried a simple code, just getting the properties of the default device:
icpx -fsycl -fsycl-targets=nvidia_gpu_sm_80,spir64_x86_64 --cuda-path=$CUDA_HOME -L $GCC_INSTALL_ROOT/lib64 enumerate_gpu.cpp
But then when I am running with 1 core and 1 nvidia gpu I get this:
srun --time=00:15:00 --partition=gputest --account=project_2008874 --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --gres=gpu:a100:1 ./a.out
srun: job 3099485 queued and waiting for resources
srun: job 3099485 has been allocated resources
0 Devices
Offload Device : AMD EPYC 7H12 64-Core Processor
max_compute_units : 1
max_work_group_size : 8192
So at the moment I have oneAPI running on the CPUs in one folder and another installation of Intel LLVM, which supports the NVIDIA GPUs, in a different folder. I wonder if I can have both in the same installation.
Cristian
Edit:
I tried a different keyword for the targets and I get this error:
$ icpx -fsycl -fsycl-targets=nvidia_cuda,spir64_x86_64 --cuda-path=$CUDA_HOME -L $GCC_INSTALL_ROOT/lib64 enumerate_gpu.cpp
icpx: error: SYCL target is invalid: 'nvidia_cuda'
PLEASE append the compiler options "-save-temps -v", rebuild the application to to get the full command which is failing and submit a bug report to https://software.intel.com/en-us/support/priority-support which includes the failing command, input files for the command and the crash backtrace (if any).
Stack dump:
0. Program arguments: /scratch/project_2008874/cristian/intel/compiler/2024.0/bin/compiler/clang++ @/local_scratch/cristian/icpx055616110151KgL6/icpxargEl3lTd
1. Compilation construction
2. Building compilation actions
#0 0x0000556d9d71a773 llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) (/scratch/project_2008874/cristian/intel/compiler/2024.0/bin/compiler/clang+++0x5144773)
#1 0x0000556d9d718c60 llvm::sys::RunSignalHandlers() (/scratch/project_2008874/cristian/intel/compiler/2024.0/bin/compiler/clang+++0x5142c60)
#2 0x0000556d9d71adf4 SignalHandler(int) Signals.cpp:0:0
#3 0x00007fbf7c9a1ce0 __restore_rt (/lib64/libpthread.so.0+0x12ce0)
#4 0x0000556d9e2c8352 (anonymous namespace)::OffloadingActionBuilder::SYCLActionBuilder::getDeviceDependences(clang::driver::OffloadAction::DeviceDependences&, clang::driver::phases::ID, clang::driver::phases::ID, llvm::SmallVectorImpl<clang::driver::phases::ID> const&) Driver.cpp:0:0
#5 0x0000556d9e2b0bdd (anonymous namespace)::OffloadingActionBuilder::addDeviceDependencesToHostAction(clang::driver::Action*, llvm::opt::Arg const*, clang::driver::phases::ID, clang::driver::phases::ID, llvm::SmallVectorImpl<clang::driver::phases::ID> const&) Driver.cpp:0:0
#6 0x0000556d9e2a3536 clang::driver::Driver::BuildActions(clang::driver::Compilation&, llvm::opt::DerivedArgList&, llvm::SmallVector<std::pair<clang::driver::types::ID, llvm::opt::Arg const*>, 16u> const&, llvm::SmallVector<clang::driver::Action*, 3u>&) const (/scratch/project_2008874/cristian/intel/compiler/2024.0/bin/compiler/clang+++0x5ccd536)
#7 0x0000556d9e29df31 clang::driver::Driver::BuildCompilation(llvm::ArrayRef<char const*>) (/scratch/project_2008874/cristian/intel/compiler/2024.0/bin/compiler/clang+++0x5cc7f31)
#8 0x0000556d9c48c940 clang_main(int, char**, llvm::ToolContext const&) (/scratch/project_2008874/cristian/intel/compiler/2024.0/bin/compiler/clang+++0x3eb6940)
#9 0x0000556d9c49bdde main (/scratch/project_2008874/cristian/intel/compiler/2024.0/bin/compiler/clang+++0x3ec5dde)
#10 0x00007fbf7add6cf3 __libc_start_main (/lib64/libc.so.6+0x3acf3)
#11 0x0000556d9c48b5a9 _start (/scratch/project_2008874/cristian/intel/compiler/2024.0/bin/compiler/clang+++0x3eb55a9)
icpx: error #10106: Fatal error in /scratch/project_2008874/cristian/intel/compiler/2024.0/bin/compiler/clang++, terminated by segmentation violation
If you install the oneAPI toolkit, it doesn't support NVIDIA GPUs. But if you build from the Intel LLVM open source, I think it can support both Intel CPUs and NVIDIA GPUs. BTW, I noticed that your CPU is AMD; we cannot make sure that the Intel OpenCL CPU RT works correctly on AMD CPUs.
For the compilation error, please use the option `-fsycl-targets=nvptx64-nvidia-cuda` to build the code for NVIDIA GPUs.
Thanks.
Hello,
So in the end things do work without root. For anyone who might get lost like me: the procedure is to install the oneAPI Base Toolkit and then the Codeplay plug-in. For example:
# Download the oneAPI offline installer
wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/163da6e4-56eb-4948-aba3-debcec61c064/l_BaseKit_p_2024.0.1.46_offline.sh
# Install in silent mode in a non-standard folder
chmod +x l_BaseKit_p_2024.0.1.46_offline.sh
./l_BaseKit_p_2024.0.1.46_offline.sh -a -s --eula accept --download-cache <temp_path>/tttt/ --install-dir <path>/intel/oneapi
# Download the plugin for NVIDIA GPUs
curl -LOJ "https://developer.codeplay.com/api/v1/products/download?product=oneapi&variant=nvidia&version=2024.0.1&filters[]=12.0&filters[]=linux"
# Install it into the oneAPI folder
./oneapi-for-nvidia-gpus-2024.0.1-cuda-12.0-linux.sh -y --extract-folder <temp_path>/tttt/ --install-dir <path>/intel/oneapi
# For usage, first set up the environment
. /scratch/project_2008874/cristian/intel/oneapi/setvars.sh --include-intel-llvm
# Compile the code with NVIDIA and CPU targets
module load cuda
clang++ -std=c++17 -O3 -fsycl -fsycl-targets=nvptx64-nvidia-cuda,spir64_x86_64 -Xsycl-target-backend=nvptx64-nvidia-cuda --cuda-gpu-arch=sm_80 <sycl_code>.cpp
# Run the code