Intel® Moderncode for Parallel Architectures
Support for developing parallel programming applications on Intel® Architecture.

OpenMP slower than single-threaded

inttel
Beginner
1,261 Views
A program whose sole purpose is to demonstrate the advantage of using 4 cores simultaneously is below.

However, it runs for 90 seconds on a 4-core Xeon (3 GHz) versus 2 seconds on a single-core machine.

Any hints greatly appreciated.

Tom




#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <omp.h>

#define N 1000
#define CHUNKSIZE 25

int main() {
    time_t sec1;
    time_t sec2;
    sec1 = time(NULL);
    printf("start \n");

    int i, chunk;
    float a;
    float b;
    float c;
    int j;
    float k;

    for (i = 0; i < N; i++)
        a = b = i * 1.0;
    chunk = CHUNKSIZE;

#pragma omp parallel for private(i,j,k) schedule(static,chunk)
    for (i = 0; i < N; i++) {
        for (j = 0; j < 200000; j++) {
            k = rand();
        }
        // c = a + b;
    }

    sec2 = time(NULL) - sec1;
    printf("%ld seconds\n", sec2);
    return 0;
}



Compiled using 'gcc -O3 -fopenmp workshare2.c -o workshare2' with GCC 4.3.2 on openSUSE 11.1 (x86-64).

7 Replies
robert-reed
Valued Contributor II
Quoting - inttel
A program whose sole purpose is to demonstrate the advantage of using 4 cores simultaneously is below.
However, it runs for 90 seconds on a 4-core Xeon (3 GHz) versus 2 seconds on a single-core machine.
Any hints greatly appreciated.

[code section excised for sanity]

Compiled using 'gcc -O3 -fopenmp workshare2.c -o workshare2' with GCC 4.3.2 on openSUSE 11.1 (x86-64).

The core of your problem is probably here:

[cpp]#pragma omp parallel for private(i,j,k) schedule (static,chunk) 
   for (i=0; i < N; i++) {
      for (j = 0; j<200000; j++) {
         k = rand(); 
      } 
      // c = a + b; 
   }[/cpp]

Though rand() is not required to be reentrant, and is therefore not required to be thread safe (see http://www.opengroup.org/onlinepubs/000095399/functions/rand.html), the fact is that some implementations provide thread safety by putting a lock inside the function, which probably means that all those parallel invocations of rand() from the various threads are being serialized. That could go a long way toward explaining the slowdown you report.

For future reference, you might consider timing just the code you're testing for parallel performance, rather than including the serial initialization section in the timed section, as is done in this example.

Dmitry_Vyukov
Valued Contributor I

the fact is that some implementations provide thread safety by putting a lock in the function


Sane implementations (e.g. Microsoft Visual C++) provide thread safety by placing all the state in thread-local storage (TLS). This is slightly sub-optimal, but it provides perfect scaling.
You might consider using a well-designed, self-contained random generator (like the one in Boost), so that you can create one generator per thread on the stack.



Dmitry_Vyukov
Valued Contributor I
There is another possible problem: 25 rand() calls per task can be too little work to outweigh the parallelization overheads. Work per task should be on the order of 10,000 machine cycles with current tools.

AndreyKarpov
New Contributor I
Tudor
New Contributor I
OK, I'm a noob at OpenMP, but maybe specifying chunksize = 25 creates too many threads and chokes your 4 cores. Try creating at most 2 * (number of cores) threads.
TimP
Honored Contributor III
Quoting - Tudor Serban
OK, I'm a noob at OpenMP, but maybe specifying chunksize = 25 creates too many threads and chokes your 4 cores. Try creating at most 2 * (number of cores) threads.
No, setting the chunk size doesn't affect the number of threads. However, you touch on a good point: normally, with work balanced among chunks, the largest possible chunk size will be superior, at least when using static scheduling with affinity set.
In this case, it's not at all clear what the original poster was getting at; it's certainly not normal usage of OpenMP. Maybe he wanted to see whether OpenMP inhibits the compiler from eliminating redundant loops.
jimdempseyatthecove
Honored Contributor III

Normally, with balanced work among chunks, the largest possible chunk size will be superior

Only when all cores/HW threads are dedicated to running your app. When anything else is running on the system, a smaller chunk size may be superior. A similar situation arises with nested levels and/or when using NOWAIT.

Jim Dempsey