OpenCL* for CPU
Ask questions and share information on Intel® SDK for OpenCL™ Applications and OpenCL™ implementations for Intel® CPU.
This forum covers OpenCL* for CPU only. OpenCL* for GPU questions can be asked in the GPU Compute Software forum. Intel® FPGA SDK for OpenCL™ questions can be asked in the FPGA Intel® High Level Design forum.

OpenCL CPU runtime crashes LuxMark v3.1 on 13900K




When I run LuxMark v3.1 (tested on Windows) with the latest OpenCL CPU runtime (2023.0.0.25922), the application crashes at the kernel-compilation step. The same machine, when equipped with a Celeron G6900 (12th-gen Alder Lake) instead, works fine.


I also tested the previous version, 2022.2.1.19741, with the same result. I installed the CPU runtime from all documented sources:

  • manually from the oneAPI GitHub repository
  • both versions downloaded from Intel (the standalone version and the version with SYCL support)
  • from the oneAPI Base Toolkit installation

I always get the same crash when running LuxMark.


I found that manually setting CL_CONFIG_CPU_TARGET_ARCH=core-avx2 on the command line before running LuxMark will make it work again.
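A minimal sketch of the workaround described above, assuming a Linux shell and that the LuxMark binary is in the current directory (the Windows equivalent is shown in comments):

```shell
# Force the Intel OpenCL CPU runtime to target AVX2 code generation
# instead of auto-detecting the host (Raptor Lake) architecture.
export CL_CONFIG_CPU_TARGET_ARCH=core-avx2

# Confirm the variable is set for child processes:
echo "CL_CONFIG_CPU_TARGET_ARCH=$CL_CONFIG_CPU_TARGET_ARCH"

# Then launch LuxMark from the same shell (uncomment if the binary is present):
# ./luxmark

# Windows (cmd.exe) equivalent:
#   set CL_CONFIG_CPU_TARGET_ARCH=core-avx2
#   luxmark.exe
```

Note the variable only affects processes started from the shell where it was set; setting it system-wide would apply the workaround to every OpenCL CPU application.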


Can Intel please make the CPU runtime work by default again? Is it possible the runtime is choosing the wrong architecture?


As a side note, FAHBench tends to crash on all processors when I try to use it with the OpenCL CPU runtime, but not with GPU runtimes, so it would be nice to have a fix or workaround for that as well.

3 Replies

Same problem on Fedora Linux 38 with the latest driver, version 2022. (or the intel-opencl package, version 22.53.25242.13):

[timothy@DESKTOP-P1MNFL1 luxmark-v3.1]$ ./luxmark
**Internal compiler error** Do not know how to split this operator's operand!

Please report the issue on Intel OpenCL forum for assistance. 
 ./luxmark: line 12: 12076 Aborted                 (core dumped) ./luxmark.bin "$@"

And if I manually set CL_CONFIG_CPU_TARGET_ARCH=core-avx2, it works, but performance is very low. With this variable set I get a LuxMark score of 11600 on Windows for my i9-13900K, compared with a native C++ score of 16200 (without using OpenCL). On Linux I get a score of 11100, versus 22100 native.


We will try to reproduce this issue and give you feedback. 


I confirmed that the next version (the oneAPI 2023.1 release) fixes the failure. Please try it and let us know if you still see the failure or a performance issue.


The likely reason is that there were some limitations in Raptor Lake support in the code generator used by the OpenCL CPU runtime.

