I'm having an interesting problem on exactly one of my computers, and I'm sure it comes down to some kind of configuration, but I'm not sure where to start. I work on a large simulation toolset written in Fortran that requires parallelization. In the past I would debug in serial using GDB and then debug the parallel code either with GDB or by adding tests to track the data flow. I'm not sure why, but after I broke my machine and set up all of the Intel tools again, I can no longer run the code through GDB: the program now starts multiple threads, which causes a failure because it was not designed to be multithreaded. Concretely, if my program is called a.out, then I can do:
mpiexec -np 1 ./a.out < input_file.in
However, I cannot do either of:

./a.out < input_file.in
gdb ./a.out < input_file.in
The code fails in both of the above cases because the input file is directed to only one of the threads; the other thread reads nothing and exits. Investigating in GDB, I found the following behavior:
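The stdin failure described above can be sketched without MPI at all: two processes sharing one open input file, where whichever process reads first consumes the data and the other sees only end-of-file. This is a minimal illustration of the mechanism, not Intel MPI's actual stdin plumbing; the file contents and names here are invented.

```python
import os
import tempfile

def shared_input_demo():
    """Two processes read from one shared open file description.

    The child reads first and consumes everything; the parent's read
    then returns nothing, analogous to one process getting the whole
    redirected input_file.in while the other sees EOF.
    Returns (bytes_read_by_child, bytes_left_for_parent).
    """
    with tempfile.NamedTemporaryFile("w+", delete=False) as f:
        f.write("hello input\n")          # 12 bytes of pretend input
        path = f.name

    fd = os.open(path, os.O_RDONLY)       # one open file description...
    pid = os.fork()                       # ...shared (offset and all) after fork
    if pid == 0:
        data = os.read(fd, 1024)          # child drains the input
        os._exit(len(data))               # report how much it got via exit code

    _, status = os.waitpid(pid, 0)
    child_bytes = os.waitstatus_to_exitcode(status)
    leftover = os.read(fd, 1024)          # offset is shared: nothing remains
    os.close(fd)
    os.unlink(path)
    return child_bytes, len(leftover)
```

Running this, the child gets all 12 bytes and the parent gets 0, which is exactly the "one reader wins, the other exits on empty input" situation.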
Breakpoint 1, main () at main.f90:19
19          call mpi_init(ierr)
(gdb) n
[New Thread 0x7ffff20a6700 (LWP 6838)]
[New Thread 0x7ffff18a5700 (LWP 6839)]
21          call mpi_comm_rank(mpi_comm_world, rank, ierr)
(gdb)
As you can see, executing the mpi_init statement spawns two additional threads. Is there a way to configure the Intel MPI library not to do this? This was never a problem before, and I have another, slower machine with the Intel toolset that does not exhibit this behavior, even though I believe I configured both the same way. Thank you so much for the help.
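For what it's worth, the [New Thread ...] lines above are in-process helper threads, not extra MPI ranks. A quick, MPI-free way to sanity-check that a library call has spawned a thread inside the current process is to read the kernel's thread count from /proc (Linux-only; this is just a hedged sketch of the check, not anything Intel-specific):

```python
import threading

def thread_count():
    """Return the kernel's thread count for this process (Linux /proc)."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("Threads:"):
                return int(line.split()[1])

def spawn_helper_thread():
    """Start a background thread -- the in-process analogue of the
    [New Thread ...] messages gdb prints when mpi_init runs."""
    stop = threading.Event()
    t = threading.Thread(target=stop.wait, daemon=True)
    t.start()            # once start() returns, the thread exists in /proc
    return stop, t
```

Comparing thread_count() before and after such a call distinguishes "the library started threads in my process" from "the launcher started extra processes", which is the distinction the gdb session above is showing.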