Hi all,
For some reason I can't get even the simplest OpenMP test program to work. I use the MKL PARDISO solver in a big code, which runs fine in parallel using OpenMP; all the omp_lib functions also work fine, so I think the linking is done correctly. However, when I try to parallelize any additional loops, the !$OMP lines simply don't seem to be recognized. Here is the simplest example I've tried:
program hello90
   use omp_lib
   integer :: id, nthreads, k
!$omp parallel private(id,k)
!$omp do
   do k = 1, 1000000
      id = omp_get_thread_num()
      write (*,*) k, 'Hello World from thread', id, omp_in_parallel()
      if ( id == 0 ) then
         nthreads = omp_get_num_threads()
         write (*,*) 'There are', nthreads, 'threads'
      end if
   end do
!$omp end do
!$omp end parallel
end program
I compile with ifort -openmp -o test.e openmptest.f90; OMP_NUM_THREADS is also set to 2 (and I verified that). The output when running it is:
./test.e
1 Hello World from thread 0 F
There are 1 threads
2 Hello World from thread 0 F
There are 1 threads
3 Hello World from thread 0 F
There are 1 threads
4 Hello World from thread 0 F
There are 1 threads
5 Hello World from thread 0 F
There are 1 threads
6 Hello World from thread 0 F
etc
I've tried different things as well, but in particular, omp_in_parallel() never returns true. I don't know of many other ways to check whether the loop is indeed parallelized or not (one more possible check is sketched below). [Oh, I should add: when compiling, the output is:
openmptest.f90(5): (col. 10) remark: OpenMP DEFINED LOOP WAS PARALLELIZED.
openmptest.f90(4): (col. 10) remark: OpenMP DEFINED REGION WAS PARALLELIZED.
]
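The only other check I can think of that might help: print omp_get_max_threads() before entering any parallel region; if the OpenMP runtime never saw OMP_NUM_THREADS, it should report 1. A minimal sketch (compiled the same way with ifort -openmp):
program check_threads
   use omp_lib
   implicit none
   ! Outside any parallel region, omp_get_max_threads() reports how
   ! many threads a subsequent parallel region would use. If the
   ! runtime never received OMP_NUM_THREADS, this prints 1.
   write (*,*) 'omp_get_max_threads() = ', omp_get_max_threads()
end program check_threads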
What am I doing wrong?
Thanks!
Quoting - moortgatgmail.com
I ran this with ifort 11.0.074 and it worked. Did you export OMP_NUM_THREADS? On my system the thread count defaults to 8 (the number of cores), but your system may default to 1. If you just set OMP_NUM_THREADS in a shell without exporting it, then on execution the program falls back to the system default.
I used bash and did the following:
export OMP_NUM_THREADS=2
./ompTest
This produced the desired output.
I believe the csh equivalent is "setenv OMP_NUM_THREADS 2".
If instead I did
OMP_NUM_THREADS=2
./ompTest
the result defaulted to the number of cores, because the un-exported variable never reaches the program's environment.
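Alternatively, if you don't want the test to depend on the shell environment at all, you could request the thread count from inside the program with omp_set_num_threads (a minimal sketch, not the original code):
program hello_fixed
   use omp_lib
   implicit none
   integer :: id
   ! Requesting the thread count in code overrides OMP_NUM_THREADS,
   ! so this works whether or not the variable was exported.
   call omp_set_num_threads(2)
!$omp parallel private(id)
   id = omp_get_thread_num()
   write (*,*) 'Hello from thread', id, ', in parallel:', omp_in_parallel()
!$omp end parallel
end program hello_fixed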