I'm developing an app that requires data to be transferred back to the host after almost every kernel call (a flag is returned).
Usually I do the processing this way:
clEnqueueNDRangeKernel(cq, ...);
clEnqueueReadBuffer(cq, ..., CL_TRUE, ...);
So the queue is synchronized on the blocking read. This works fine on AMD GPUs/APUs with only a few percent CPU load, but on the Intel GPU it leads to constant 100% CPU usage (the app fully occupies one CPU core the whole time).
When I tried this sequence instead:
clEnqueueNDRangeKernel(cq, ...);
clFinish(cq);
clEnqueueReadBuffer(cq, ..., CL_TRUE, ...);
the CPU load dropped considerably. So it looks like synchronizing on clFinish() and synchronizing on a blocking buffer read behave quite differently in the Intel OpenCL runtime. Why is that? Is this behavior in agreement with the OpenCL standard?
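Roughly, a stripped-down sketch of the pattern is below (the set_flag kernel, buffer size, and loop counts are just placeholders, not the real app; error checking is omitted):

/* Minimal sketch (not the original app): compares the two host-side
 * synchronization patterns described above. The kernel, buffer size and
 * loop counts are placeholders; error checking is mostly omitted. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void set_flag(__global int *flag) {\n"
    "    if (get_global_id(0) == 0) flag[0] = 1;\n"
    "}\n";

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue cq = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "set_flag", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, sizeof(cl_int), NULL, NULL);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);

    size_t gws = 1;
    cl_int flag = 0;

    /* Pattern 1: synchronize on the blocking read only
     * (spins one CPU core at 100% on the Intel runtime). */
    for (int i = 0; i < 10000; ++i) {
        clEnqueueNDRangeKernel(cq, kernel, 1, NULL, &gws, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(cq, buf, CL_TRUE, 0, sizeof(cl_int), &flag,
                            0, NULL, NULL);
    }

    /* Pattern 2: clFinish() first, then the blocking read
     * (drops the CPU load considerably). */
    for (int i = 0; i < 10000; ++i) {
        clEnqueueNDRangeKernel(cq, kernel, 1, NULL, &gws, NULL, 0, NULL, NULL);
        clFinish(cq);
        clEnqueueReadBuffer(cq, buf, CL_TRUE, 0, sizeof(cl_int), &flag,
                            0, NULL, NULL);
    }

    printf("flag = %d\n", flag);

    clReleaseMemObject(buf);
    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseCommandQueue(cq);
    clReleaseContext(ctx);
    return 0;
}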
Well, things actually look even stranger.
CPU usage decreases when I put additional synchronization points (i.e., clFinish(cq) calls) even between kernel enqueues.
So,
clEnqueueNDRangeKernel(cq,kernel1,...);
clEnqueueNDRangeKernel(cq,kernel2,...);
consumes more CPU (but has lower overall execution time; the kernels execute on the GPU, of course) than
clEnqueueNDRangeKernel(cq,kernel1,...);
clFinish(cq);
clEnqueueNDRangeKernel(cq,kernel2,...);
Any comments from the OpenCL runtime development team?
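In other words (sketch only, reusing a queue and kernel created as in the earlier sketch; iteration counts are arbitrary):

/* The two submission patterns from this post, as helpers that take an
 * already-created queue and kernel (e.g. from the sketch above). */
#include <CL/cl.h>

/* Back-to-back enqueues, one clFinish() at the end:
 * lower wall-clock time, but more CPU usage on the Intel runtime. */
static void submit_batched(cl_command_queue cq, cl_kernel kernel, size_t gws)
{
    for (int i = 0; i < 1000; ++i)
        clEnqueueNDRangeKernel(cq, kernel, 1, NULL, &gws, NULL, 0, NULL, NULL);
    clFinish(cq);
}

/* clFinish() after every enqueue:
 * lower CPU usage, at the cost of longer total execution time. */
static void submit_serialized(cl_command_queue cq, cl_kernel kernel, size_t gws)
{
    for (int i = 0; i < 1000; ++i) {
        clEnqueueNDRangeKernel(cq, kernel, 1, NULL, &gws, NULL, 0, NULL, NULL);
        clFinish(cq);
    }
}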
Hi,
Both the blocking read and clFinish() have similar performance. The behavior is not identical, but you shouldn't see much of a performance difference. Is it possible to provide a repro?
Thanks,
Raghu
