The performance of the Xeon Phi benchmarked with a 2D convolution in OpenCL seems much better than an OpenMP implementation, even with compiler-enabled vectorization. The OpenMP version was run in Phi native mode, and timing covered only the computation part: the for-loop nest. For the OpenCL implementation, timing likewise covered only kernel computation; no data transfer was included. The OpenMP version was tested with 2, 4, 60, 120, and 240 threads; 240 threads gave the best performance with a balanced thread-affinity setting. Yet OpenCL was around 17x faster even against that 240-thread OpenMP baseline with pragma-enabled vectorization in the source code. Input image sizes ranged from 1024x1024 up to 16384x16384, and filter sizes from 3x3 up to 17x17. In all runs, OpenCL was better than OpenMP. Is this an expected speedup for OpenCL? It seems too good to be true.
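For reference, a typical OpenCL 2D convolution kernel of this shape maps one work-item to one output pixel. The sketch below is an assumption about the setup, not the poster's actual kernel; the parameter names mirror the OpenMP source shown later in the thread.

__kernel void Convolve(__global const float * pInput,
                       __constant float * pFilter,
                       __global float * pOutput,
                       const int nInWidth,
                       const int nWidth,
                       const int nFilterWidth)
{
    // One work-item computes one output pixel.
    const int xOut = get_global_id(0);
    const int yOut = get_global_id(1);

    float sum = 0.0f;
    for (int r = 0; r < nFilterWidth; r++)
    {
        const int idxFtmp = r * nFilterWidth;              // start of filter row r
        const int idxIntmp = (yOut + r) * nInWidth + xOut; // start of input row under the filter
        for (int c = 0; c < nFilterWidth; c++)
            sum += pFilter[idxFtmp + c] * pInput[idxIntmp + c];
    }
    pOutput[yOut * nWidth + xOut] = sum;
}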
EDIT:
Compilation (OpenMP):

icc Convolve.cpp -fopenmp -mmic -O3 -vec-report1 -o conv.mic
Convolve.cpp(24): (col. 17) remark: LOOP WAS VECTORIZED
Source (Convolve.cpp):
void Convolution_Threaded(float * pInput, float * pFilter, float * pOutput,
                          const int nInWidth, const int nWidth, const int nHeight,
                          const int nFilterWidth, const int nNumThreads)
{
    #pragma omp parallel for num_threads(nNumThreads)
    for (int yOut = 0; yOut < nHeight; yOut++)
    {
        const int yInTopLeft = yOut;
        for (int xOut = 0; xOut < nWidth; xOut++)
        {
            const int xInTopLeft = xOut;
            float sum = 0;
            for (int r = 0; r < nFilterWidth; r++)
            {
                const int idxFtmp = r * nFilterWidth;
                const int yIn = yInTopLeft + r;
                const int idxIntmp = yIn * nInWidth + xInTopLeft;
                #pragma ivdep           // discards any data dependencies assumed by compiler
                #pragma vector aligned  // all data accessed in the loop is properly aligned
                for (int c = 0; c < nFilterWidth; c++)
                {
                    const int idxF = idxFtmp + c;
                    const int idxIn = idxIntmp + c;
                    sum += pFilter[idxF] * pInput[idxIn];
                }
            }
            const int idxOut = yOut * nWidth + xOut;
            pOutput[idxOut] = sum;
        }
    }
}
Result of OpenMP (in comparison with OpenCL):
          image        filter   exec time (ms)
OpenMP    2048x2048    3x3      23.4
OpenCL    2048x2048    3x3      1.04*
*Raw kernel execution time. Data transfer time over the PCIe bus not included.
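As a cross-check on the raw kernel time, OpenCL event profiling can be used instead of host timers. The sketch below assumes the command queue was created with CL_QUEUE_PROFILING_ENABLE; the queue, kernel, and size variables are placeholders.

cl_event evt;
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, globalSize, localSize, 0, NULL, &evt);
clWaitForEvents(1, &evt);

cl_ulong tStart = 0, tEnd = 0;
clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_START, sizeof(tStart), &tStart, NULL);
clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_END,   sizeof(tEnd),   &tEnd,   NULL);
double kernelMs = (double)(tEnd - tStart) * 1e-6;  // profiling timestamps are in nanoseconds
clReleaseEvent(evt);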
Sorry for the authoring error on my side. Please kindly see the comment above.
OpenMP: was it the first #pragma omp parallel executed in your program? If so, the measurement may include worker-creation time. Run a simple omp parallel for with the same number of workers before the measurement to ensure the workers already exist.
-> The timing starts before the omp pragma, so it includes worker creation. Actually, the entire parallel loop nest is encapsulated in a function:
startTiming;
Conv();
stopTiming;
void Conv(float * pInput, float * pFilter, float * pOutput,
          const int nInWidth, const int nWidth, const int nHeight,
          const int nFilterWidth, const int nNumThreads)
{
    #pragma omp parallel for num_threads(nNumThreads)
    for (int yOut = 0; yOut < nHeight; yOut++)
    {
        const int yInTopLeft = yOut;
        for (int xOut = 0; xOut < nWidth; xOut++)
        {
            const int xInTopLeft = xOut;
            float sum = 0;
            for (int r = 0; r < nFilterWidth; r++)
            {
                const int idxFtmp = r * nFilterWidth;
                const int yIn = yInTopLeft + r;
                const int idxIntmp = yIn * nInWidth + xInTopLeft;
                #pragma ivdep           // discards any data dependencies assumed by compiler
                #pragma vector aligned  // all data accessed in the loop is properly aligned
                for (int c = 0; c < nFilterWidth; c++)
                {
                    const int idxF = idxFtmp + c;
                    const int idxIn = idxIntmp + c;
                    sum += pFilter[idxF] * pInput[idxIn];
                }
            }
            const int idxOut = yOut * nWidth + xOut;
            pOutput[idxOut] = sum;
        }
    }
}
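A minimal sketch of the warm-up suggested above, assuming the timing harness uses omp_get_wtime; the TimedConv wrapper is hypothetical, not from the original post.

#include <omp.h>
#include <stdio.h>

void TimedConv(float * pInput, float * pFilter, float * pOutput,
               const int nInWidth, const int nWidth, const int nHeight,
               const int nFilterWidth, const int nNumThreads)
{
    // Warm-up: an empty parallel region forces the runtime to create the
    // worker pool, so thread creation is not charged to the timed region.
    #pragma omp parallel num_threads(nNumThreads)
    { }

    double t0 = omp_get_wtime();
    Conv(pInput, pFilter, pOutput, nInWidth, nWidth, nHeight, nFilterWidth, nNumThreads);
    double t1 = omp_get_wtime();
    printf("Conv: %.3f ms\n", (t1 - t0) * 1e3);
}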
OpenCL: How did you measure? Try the host time-difference method with the NDRange of interest surrounded by clFinish.
-> Used host-side timing:
startTiming;
clEnqueueNDRangeKernel();
clFinish();
stopTiming;
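Concretely, that pattern might look like the sketch below; note the extra clFinish before the timer starts, so earlier queued work is not counted. The queue and kernel variables and the getHostTimeMs helper are placeholders, not the poster's actual code.

clFinish(queue);                 // drain any earlier work before starting the timer
double t0 = getHostTimeMs();     // hypothetical wall-clock helper
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, globalSize, localSize, 0, NULL, NULL);
clFinish(queue);                 // block until the kernel has completed
double t1 = getHostTimeMs();
double kernelMs = t1 - t0;       // covers only the NDRange of interest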
PS: the number of iterations (runs) is set high enough, e.g., 25 or 50, and the results are then averaged to get the execution time. This should take care of the warm-up of threads involved in the first iteration.
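A sketch of that averaging scheme, assuming the same Conv signature as above; the run count is illustrative.

const int nRuns = 50;  // e.g., 25 or 50 as in the post
double total = 0.0;
for (int i = 0; i < nRuns; i++)
{
    double t0 = omp_get_wtime();
    Conv(pInput, pFilter, pOutput, nInWidth, nWidth, nHeight, nFilterWidth, nNumThreads);
    double t1 = omp_get_wtime();
    total += (t1 - t0);
}
double avgMs = total / nRuns * 1e3;  // average execution time in ms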
Lol! OK. The Phi is indeed running in native mode, and does not mix with the separate OpenCL process. :)
Thanks
