Intel® oneAPI Data Parallel C++
Support for Intel® oneAPI DPC++ Compiler, Intel® oneAPI DPC++ Library, Intel ICX Compiler, Intel® DPC++ Compatibility Tool, and GDB*

"Segmentation fault" in sycl::device::get_devices

baranovsky
Beginner

The following simple program crashes with a segmentation fault in one of my environments.

#include <CL/sycl.hpp>

int main()
{
    for (auto dev : sycl::device::get_devices())
        std::cout << dev.get_info<sycl::info::device::name>() << std::endl;

    return 0;
}
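One thing worth trying, as a hedged sketch: DPC++ 2022.x honors the SYCL_DEVICE_FILTER environment variable, which restricts which backend plugins the runtime loads at all. Since the crash below happens during plugin loading, running the same binary under different filters can narrow down which plugin is at fault (a.out is the compiled binary from the steps later in this thread):

```shell
# Restrict the SYCL runtime to a single backend so only that backend's
# plugin is dlopen()ed; if one filter crashes and another does not, the
# fault is in that plugin's load path, not in get_devices() itself.
SYCL_DEVICE_FILTER=opencl:cpu ./a.out
SYCL_DEVICE_FILTER=cuda ./a.out
```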
Environments tested:
A. Segmentation fault

OS: Ubuntu 20.04 LTS (in docker)

HOST OS: CentOS Linux release 7.6.1810 (Core)

CPU: Intel Xeon Gold 6330 (dual CPU)

GPU: NVIDIA RTX A5000 (8 GPUs)

B. No error

OS: Ubuntu 20.04 LTS

CPU: Intel Core i9-12900

GPU: NVIDIA RTX A6000 (dual GPU)

C. No error

OS: Ubuntu 20.04 LTS

CPU: Intel Core i7-11700F

GPU: NVIDIA RTX A6000

 

I use the get_devices function to select among multiple devices.

I suspect that the in-Docker environment is not handled well by the DPC++ implementation, but it is not possible to change the host OS (currently CentOS).

Is there any solution (or possible workaround)?

 

Stacktrace:

#0  0x00007ffff77ff722 in free () from /usr/lib/x86_64-linux-gnu/libc.so.6
#1  0x00007ffff7c6eb7d in ?? () from /usr/lib/x86_64-linux-gnu/libdl.so.2
#2  0x00007ffff7c6e3da in dlopen () from /usr/lib/x86_64-linux-gnu/libdl.so.2
#3  0x00007ffff7c63005 in cl::sycl::detail::pi::loadOsLibrary(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) () from /opt/intel/oneapi/compiler/2022.1.0/linux/lib/libsycl.so.5
#4  0x00007ffff7b3f252 in std::call_once<cl::sycl::detail::pi::initialize()::$_1>(std::once_flag&, cl::sycl::detail::pi::initialize()::$_1&&)::{lambda()#2}::__invoke() ()
   from /opt/intel/oneapi/compiler/2022.1.0/linux/lib/libsycl.so.5
#5  0x00007ffff77454df in __pthread_once_slow () from /usr/lib/x86_64-linux-gnu/libpthread.so.0
#6  0x00007ffff7b3ab3a in cl::sycl::detail::pi::initialize() () from /opt/intel/oneapi/compiler/2022.1.0/linux/lib/libsycl.so.5
#7  0x00007ffff7b888e7 in cl::sycl::detail::platform_impl::get_platforms() () from /opt/intel/oneapi/compiler/2022.1.0/linux/lib/libsycl.so.5
#8  0x00007ffff7c56409 in cl::sycl::platform::get_platforms() () from /opt/intel/oneapi/compiler/2022.1.0/linux/lib/libsycl.so.5
#9  0x00007ffff7c21a28 in cl::sycl::device::get_devices(cl::sycl::info::device_type) () from /opt/intel/oneapi/compiler/2022.1.0/linux/lib/libsycl.so.5
#10 0x0000000000401207 in main ()
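The crash occurs inside free() called from dlopen(), i.e. while the runtime is loading a backend plugin; that pattern often points at LD_LIBRARY_PATH injecting a library that is incompatible with the system glibc/libstdc++. As a diagnostic sketch (LD_DEBUG is a standard glibc dynamic-loader facility; the log filename is illustrative), one can dump which libraries the loader actually resolves:

```shell
# Ask the glibc dynamic loader to log every library it binds, then check
# whether anything under /opt/llvm/lib shadows a system library.
LD_DEBUG=libs ./a.out 2> ld_debug.log
grep '/opt/llvm/lib' ld_debug.log
```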

 

Thank you for your help in advance!

7 Replies
SeshaP_Intel
Moderator

Hi,


Thank you for posting in Intel Communities.


Could you please share the Docker file, dpcpp version, and the steps to reproduce the issue, so that we can investigate more from our end?


Thanks and Regards,

Pendyala Sesha Srinivas


baranovsky
Beginner

Thank you for your response!

As I've just been informed that the oneAPI Base Toolkit has been updated,

I will try the latest version and share the result along with my Dockerfile.

SeshaP_Intel
Moderator

Hi,


We haven't heard back from you. Could you please provide an update on your issue?


Thanks and Regards,

Pendyala Sesha Srinivas


baranovsky
Beginner

Sorry for the late reply.

 

I've updated my Dockerfile to use the latest oneAPI (2022.3) and got the following result:

1. Start a Docker container:

$ docker run --name RX --privileged --ulimit core=0 --hostname g04RX --shm-size=1024GB -dit --gpus all oneapi:cuda11.5.2-oneapi2022.3.0RX

2. Connect to the Docker container:

$ docker exec -it RX /bin/bash

3. Compile 'test.cpp' and run:

# source /opt/intel/oneapi/setvars.sh
# export PATH=/opt/llvm/bin:$PATH
# export LD_LIBRARY_PATH=/opt/llvm/lib:$LD_LIBRARY_PATH
# dpcpp test.cpp
# ./a.out
Segmentation fault

I found that when I omit the LD_LIBRARY_PATH export in step 3, a.out runs successfully.

However, since I want to run my program on NVIDIA GPUs using the open-source LLVM build, that line is essential for me...

 

My Dockerfile and build script are attached.

build.sh

docker build --target runner -t oneapi:cuda11.5.2-oneapi2022.3.0RX .
SeshaP_Intel
Moderator

Hi,


Thanks for your patience while we were checking on this issue. 


We can only offer direct support for Intel hardware platforms that the Intel® oneAPI product supports. 

Intel provides instructions on how to compile oneAPI code for both CPU and a wide range of GPU accelerators. 

Please refer to the below link for more details.

https://intel.github.io/llvm-docs/GetStartedGuide.html
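To make that concrete, the guide describes compiling for the CUDA backend with the open-source clang++ directly. The following is a hedged sketch under the assumption that the crash comes from mixing the oneAPI dpcpp runtime with the /opt/llvm libraries on LD_LIBRARY_PATH (test_cuda is an illustrative output name; paths follow the poster's container layout):

```shell
# Compile with the open-source intel/llvm toolchain only (no dpcpp),
# targeting the CUDA backend, and set the library path just for this
# invocation instead of exporting it globally, so the oneAPI
# libsycl.so.5 is never mixed with the /opt/llvm runtime.
/opt/llvm/bin/clang++ -fsycl -fsycl-targets=nvptx64-nvidia-cuda test.cpp -o test_cuda
LD_LIBRARY_PATH=/opt/llvm/lib ./test_cuda
```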


Thanks and Regards,

Pendyala Sesha Srinivas


SeshaP_Intel
Moderator

Hi,


Has the information provided above helped? If yes, could you please confirm whether we can close this thread from our end?


Thanks and Regards,

Pendyala Sesha Srinivas


SeshaP_Intel
Moderator

Hi,


Thanks for accepting our solution. If you need any additional information, please post a new question as this thread will no longer be monitored by Intel.


Thanks and Regards,

Pendyala Sesha Srinivas

