I'm developing an app that needs data transferred back to the host after almost every kernel call (a flag is returned). Usually I do the processing this way:
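The original snippet did not survive, but the described pattern (names like cq, kernel, and flag_buf are illustrative, not from the original post) would look roughly like this:

```c
/* Sketch of the pattern described above: enqueue a kernel, then
   synchronize on a blocking read of the result flag. */
cl_int flag = 0;
clEnqueueNDRangeKernel(cq, kernel, 1, NULL, &gws, NULL, 0, NULL, NULL);
/* CL_TRUE makes the read blocking: the call returns only after the
   kernel has finished and the flag has been copied back to the host. */
clEnqueueReadBuffer(cq, flag_buf, CL_TRUE, 0, sizeof(flag),
                    &flag, 0, NULL, NULL);
```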
So the queue is synced on a blocking read. This works fine on AMD GPUs/APUs with a few % CPU load, but on an Intel GPU it leads to constant 100% CPU usage (the app constantly saturates one CPU core).
When I tried this sequence instead:
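Again the original snippet is missing; the alternative being described (same illustrative names as above) is presumably to wait on clFinish() before the read:

```c
/* Same work, but synchronizing on clFinish() first; the subsequent
   blocking read then completes almost immediately. */
clEnqueueNDRangeKernel(cq, kernel, 1, NULL, &gws, NULL, 0, NULL, NULL);
clFinish(cq);   /* wait here instead of inside the read */
clEnqueueReadBuffer(cq, flag_buf, CL_TRUE, 0, sizeof(flag),
                    &flag, 0, NULL, NULL);
```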
The CPU load dropped considerably. So it looks like syncing on clFinish() and syncing on a blocking buffer read work quite differently in the Intel OpenCL runtime. Why is that? Is this in agreement with the OpenCL standard?
Well, actually things look even stranger.
CPU usage decreases when I put additional synchronization points (i.e., clFinish(cq) calls) even between kernel enqueues. Enqueuing several kernels and calling clFinish(cq) only once at the end consumes more CPU (but gives less overall execution time; the kernels execute on the GPU, of course) than calling clFinish(cq) after each enqueue.
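The two sequences being compared can be sketched like this (illustrative kernel names, not from the original post):

```c
/* Variant A: batch the enqueues, synchronize once at the end.
   Observed: lower wall-clock time, but higher CPU load. */
clEnqueueNDRangeKernel(cq, k1, 1, NULL, &gws, NULL, 0, NULL, NULL);
clEnqueueNDRangeKernel(cq, k2, 1, NULL, &gws, NULL, 0, NULL, NULL);
clFinish(cq);

/* Variant B: clFinish() after every enqueue.
   Observed: lower CPU load, but longer overall execution time. */
clEnqueueNDRangeKernel(cq, k1, 1, NULL, &gws, NULL, 0, NULL, NULL);
clFinish(cq);
clEnqueueNDRangeKernel(cq, k2, 1, NULL, &gws, NULL, 0, NULL, NULL);
clFinish(cq);
```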
Any comments from the OpenCL runtime development team?
Both the blocking read and the clFinish() have similar performance. The behavior is not identical, but you shouldn't see much performance difference. Is it possible to provide a repro?