Intel® Fortran Compiler
Build applications that can scale for the future with optimized code designed for Intel® Xeon® and compatible processors.

OpenMP-MPI hybrid code - SIGSEGV

amit-amritkar
New Contributor I
Hi,

I have a hybrid MPI-OpenMP Fortran code that I run with 1 MPI process and 2 OpenMP threads,
and everything works correctly. (The code is parallelized at the loop level.)

When I tried to run the code on 4 threads I got NaNs in my solution.

I went through previous forum posts and found that adding the flags -shared-intel -mcmodel=medium
gets me past the memory problem, but now I run into the segmentation fault below.
I also went through the segmentation-fault guide and couldn't find an answer to this problem.
(I have also tried setting KMP_STACKSIZE to 25 GB and ulimit -s unlimited.)
btw, I use ifort 10.0.
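
For context, a sketch of the environment setup before the run (values here are illustrative, not the exact ones from my run; note that KMP_STACKSIZE is a per-thread limit, so 25 GB on each of 4 threads would demand an enormous amount of memory, and a few hundred MB per thread is usually plenty):

```shell
# KMP_STACKSIZE applies to EACH OpenMP worker thread, not the whole process
export KMP_STACKSIZE=512m
# Number of OpenMP threads per MPI process
export OMP_NUM_THREADS=4
# Unlimited stack for the master thread (ignore failure if the hard limit is lower)
ulimit -s unlimited 2>/dev/null || true
# mpirun -np 1 ./2009.x   # actual launch, commented out here
```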

amit@sys:~/openmp> mpirun -np 1 ./2009.x
MPI: On host sys, Program /openmp/2009.x, Rank 0, Process 26346 received signal SIGSEGV(11)


MPI: --------stack traceback-------
MPI: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
MPI: aborting job
MPI: Received signal 11

Thanks,
Amit
TimP
Honored Contributor III

You could try the -check and -g options to find out where the problems are. I wouldn't expect to find answers without following up on Ron's suggestions about how to dig deeper. In principle, Intel Thread Checker should be applicable, although it hasn't had much maintenance lately.
Why are you using such an old compiler?
When running a single MPI process it shouldn't matter which MPI you are using, but if your MPI is as old as your compiler, you are probably wasting your time. The current Intel and HP MPI libraries have built-in support for hybrid runs.
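
A debug build along those lines might look like this (mpif90 as the compiler wrapper and solver.f90 are placeholders; substitute your own wrapper and source files):

```shell
# Hypothetical debug build; mpif90 and solver.f90 stand in for your wrapper and sources.
# -g -traceback : emit symbols and print the failing source line on a SIGSEGV
# -check bounds : run-time array bounds checking (slower, but pinpoints bad accesses)
FLAGS="-g -traceback -check bounds"
# Guarded so the line is a no-op on machines without an MPI wrapper installed:
command -v mpif90 >/dev/null && mpif90 $FLAGS -openmp -o 2009.x solver.f90 || true
```

With -traceback, the runtime prints the routine name and source line at the point of the crash instead of a bare SIGSEGV, which usually narrows the search considerably.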
Ron_Green
Moderator

As Tim says, do try -g -traceback and see if anything comes of that.

In older MPI implementations I have seen errors where the environment is not propagated to child processes spawned by mpirun. I would write a simple script:

#!/bin/bash
env

and run this under mpirun: mpirun -np 1 ./myscript

just to see whether your stack-size and KMP variables are propagating. You could/should also add those ulimit and KMP definitions to your login scripts (.profile, .cshrc, or whichever you use).
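
A sketch of that check, with the output filtered down to the relevant variables (the 512m value and the grep pattern are illustrative; under MPI the last line would be replaced by the mpirun invocation shown in the comment):

```shell
#!/bin/bash
# Pretend-parent shell: set the variable, then verify that a child environment
# listing sees it, the same way the env script above would under mpirun.
export KMP_STACKSIZE=512m    # illustrative value
# Under MPI this would be: mpirun -np 1 ./myscript | grep -E 'KMP_|OMP_'
env | grep -E 'KMP_|OMP_'
```

If KMP_STACKSIZE (or your OMP_ variables) do not appear in the filtered output, the launcher is dropping the environment and the 4-thread run never saw your stack settings at all.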