I have implemented a straightforward naive matrix
multiplication in OpenCL with the AMD SDK, running it only on CPUs. I get a
speedup of around 16 on an 8-core CPU system. I then applied
some popular optimizations, like using private memory and local
memory, and grouping my matrix in one dimension so that I use
both global and local work sizes. Now I get a speedup of around 24
on the same 8-core CPU.
First, I am surprised by this much speedup, because on
8 cores I normally get a speedup of around 8 or less with OpenMP, for
example. How are figures of 16 and 24 possible?
Secondly, local + private memory and grouping of work-items are
optimizations that I heard are only for GPUs, not for CPUs, so I
again wonder how I get such a boost when I run only on CPUs.
Thirdly, I wonder how local and private memory and grouping
are handled on CPUs, since they clearly cause speedup: caches, processor
registers, or what? It seems like magic to get so much speedup.
I also want to know what the CPU-specific optimizations in OpenCL are.
Please help me clarify this; I am new to OpenCL and it is giving me such big
performance that I can hardly believe it. I have verified the results and they are
perfectly accurate. Thanks in advance.
Maybe it is because OpenMP does not optimize your code for parallelization the way OpenCL does, e.g. by using SIMD instructions :-) Also, as you say, you play with different kinds of memory, which is very important even on a CPU (like avoiding register spills, etc.).
It depends on how you port your code to OpenCL, but a major factor could be the SIMD utilization mentioned by Polar01. Another reason could be better cache utilization: when you use local memory and it all maps to the L1 cache, you might see a significant speedup.
However, all of this is only speculation, and we can't comment on the AMD SDK specifically.
But is SIMD utilization or auto-vectorization possible if I haven't used OpenCL vector types, for example? And can local/private memory really boost speedup on CPUs? I am confused because someone told me that for CPU devices there is no local memory in OpenCL, so there is no benefit, and that it only improves performance on GPUs...
CPUs do have "local memories": the caches. When used properly, they can give a significant speedup.
As already said, it's very difficult to understand where the performance numbers come from without seeing the code.
First of all, verify the code's correctness; in some cases you see a "speedup" because the multi-threaded code produces different results.
You can try the Intel OpenCL SDK together with VTune Amplifier (http://software.intel.com/en-us/articles/intel-vtune-amplifier-xe/), which the SDK supports, to try to understand the real reason.
I can also refer you to the Intel performance guidelines document (http://www.intel.com/Assets/PDF/manual/248966.pdf), which can provide a few answers.