Hello
When I run LuxMark v3.1 (tested on Windows) with the latest OpenCL CPU runtime (2023.0.0.25922), the application crashes at the kernel-compilation step. The same benchmark on a machine with a Celeron G6900 (12th-gen Alder Lake) works fine.
I also tested the previous version, 2022.2.1.19741, with the same result, and I installed the CPU runtime from all documented sources:
- manually from the oneAPI GitHub
- both versions downloaded from Intel (the plain version and the one with SYCL support)
- from the oneAPI Base Toolkit installation
In every case I get the same crash when running LuxMark.
I found that manually setting CL_CONFIG_CPU_TARGET_ARCH=core-avx2 on the command line before running LuxMark makes it work again.
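For reference, a minimal sketch of that workaround on Linux (the `./luxmark` launch line is illustrative; on Windows the equivalent is `set CL_CONFIG_CPU_TARGET_ARCH=core-avx2` in the same console before starting LuxMark):

```shell
# Force the Intel OpenCL CPU runtime to generate core-avx2 code instead
# of auto-detecting the host architecture, which appears to go wrong here.
export CL_CONFIG_CPU_TARGET_ARCH=core-avx2

# Confirm the variable is visible to child processes:
echo "$CL_CONFIG_CPU_TARGET_ARCH"   # prints: core-avx2

# Then launch LuxMark from this same shell, e.g.:
# ./luxmark
```

The variable only affects processes started from that shell, so it has to be set in the same session (or made persistent) each time LuxMark is run.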
Can Intel please make the CPU runtime work by default again? Is it possible the runtime chooses the wrong target architecture?
As a side note, FAHBench tends to crash on all processors when I use it with the OpenCL CPU runtime, but not with GPU runtimes, so a fix or workaround for that would be nice as well.
Same problem on Fedora Linux 38 with the latest driver, version 2022.15.12.0.01_081451 (or the intel-opencl package, version 22.53.25242.13):
```
[timothy@DESKTOP-P1MNFL1 luxmark-v3.1]$ ./luxmark
Internal compiler error: Do not know how to split this operator's operand!
Please report the issue on Intel OpenCL forum
https://software.intel.com/en-us/forums/opencl for assistance.
./luxmark: line 12: 12076 Aborted (core dumped) ./luxmark.bin "$@"
```
If I manually set CL_CONFIG_CPU_TARGET_ARCH=core-avx2, it works, but performance is very low. With this variable set, my i9-13900K gets a LuxMark score of 11600 on Windows, versus 16200 with Native C++ (without using OpenCL). On Linux I get 11100, versus 22100 native.
We will try to reproduce this issue and give you feedback.
I confirmed that the next version (the oneAPI 2023.1 release) fixes the failure. Please try it and let us know if you still see a failure or a performance issue.
The likely cause is a limitation in Raptor Lake support in the code generator used by the OpenCL CPU runtime.