However, it runs for 90 seconds on a 4-core Xeon (3 GHz) versus 2 seconds on a single-core machine.
Any hints greatly appreciated.
Tom
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <omp.h>

#define N 1000
#define CHUNKSIZE 25

int main(void) {
    time_t sec1;
    time_t sec2;
    sec1 = time(NULL);
    printf("start\n");
    int i, chunk;
    float a;
    float b;
    float c;
    int j;
    float k;
    for (i = 0; i < N; i++)
        a = b = i * 1.0f;
    chunk = CHUNKSIZE;
    #pragma omp parallel for private(i,j,k) schedule(static,chunk)
    for (i = 0; i < N; i++) {
        for (j = 0; j < 200000; j++) {
            k = rand();
        }
        // c = a + b;
    }
    sec2 = time(NULL) - sec1;
    printf("%ld seconds\n", (long)sec2);
    return 0;
}
Compiled using 'gcc -O3 -fopenmp workshare2.c -o workshare2' with gcc 4.3.2 on 64-bit openSUSE 11.1.
The core of your problem is probably here:
[cpp]#pragma omp parallel for private(i,j,k) schedule(static,chunk)
for (i = 0; i < N; i++) {
    for (j = 0; j < 200000; j++) {
        k = rand();
    }
    // c = a + b;
}[/cpp]
Though rand() is not required to be reentrant, and therefore not required to be thread-safe (see http://www.opengroup.org/onlinepubs/000095399/functions/rand.html), the fact is that some implementations provide thread safety by putting a lock in the function, which probably means that all those parallel invocations of rand() from the various threads are being serialized. That could go a long way toward explaining the slowdown you report.
For future reference, you might consider timing just the code you're testing for parallel performance, rather than including the serial initialization section in the timed section, as is done in this example.
the fact is that some implementations provide thread safety by putting a lock in the function
Sane implementations (Microsoft Visual C++) provide thread safety by placing all the data in thread-local storage (TLS). This is slightly sub-optimal, but provides perfect scaling.
You may consider using a well-designed, self-contained random generator (like the one in Boost), so that you can create one generator per thread on the stack.
In this case, it's not at all clear what the original poster was getting at. It's certainly not a normal usage of OpenMP. Maybe he wanted to see whether OpenMP inhibits the compiler from eliminating redundant loops.
Normally, with balanced work among chunks, the largest possible chunk size will be superior
Only when all cores/HW threads are dedicated to running your app. When anything else is running on the system, a smaller chunk size may be superior. The situation is similar with nested parallel levels and/or when using NOWAIT.
Jim Dempsey