Maybe this is just in my head, but did the Intel Fortran traceback output change recently? It seems like we used to get tracebacks like the ones shown on this page:
forrtl: error (72): floating overflow
Image      PC        Routine    Line     Source
ovf.exe    08049E4A  MAIN__     14       ovf.f90
ovf.exe    08049F08  Unknown    Unknown  Unknown
ovf.exe    400B3507  Unknown    Unknown  Unknown
but instead we are seeing a lot of:
[borgr168:11828:0:11828] Caught signal 8 (Floating point exception: floating-point invalid operation)
==== backtrace ====
 0 /usr/lib64/libucs.so.0(+0x1935c) [0x2aab7485b35c]
 1 /usr/lib64/libucs.so.0(+0x19613) [0x2aab7485b613]
 2 /gpfsm/dswdev/bmauer/models/GEOSadas-5_12_4_p23_SLES12_M2-OPS/GEOSadas/Linux/bin/GEOSgcm.x() [0x430f16a]
 3 /gpfsm/dswdev/bmauer/models/GEOSadas-5_12_4_p23_SLES12_M2-OPS/GEOSadas/Linux/bin/GEOSgcm.x() [0x40f4d33]
 4 /gpfsm/dswdev/bmauer/models/GEOSadas-5_12_4_p23_SLES12_M2-OPS/GEOSadas/Linux/bin/GEOSgcm.x() [0x3fefafc]
 5 /gpfsm/dswdev/bmauer/models/GEOSadas-5_12_4_p23_SLES12_M2-OPS/GEOSadas/Linux/bin/GEOSgcm.x() [0xd724458]
 6 /gpfsm/dswdev/bmauer/models/GEOSadas-5_12_4_p23_SLES12_M2-OPS/GEOSadas/Linux/bin/GEOSgcm.x() [0xd726699]
 7 /gpfsm/dswdev/bmauer/models/GEOSadas-5_12_4_p23_SLES12_M2-OPS/GEOSadas/Linux/bin/GEOSgcm.x() [0xd954e7b]
Now, it is entirely possible this is due to a system change (we recently upgraded from SLES 11 to SLES 12), so perhaps a system library is missing? I'm not sure, though: I can build the toy example from the page above and get the "desired" traceback on both OSes.
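For reference, here is a minimal sketch of the kind of toy program I mean (my own reconstruction, not necessarily the exact code from that page); built with -fpe0 -traceback, it aborts with a forrtl floating-overflow traceback like the one shown above:

! ovf.f90 -- hypothetical toy program that triggers a floating overflow
! build: ifort -O0 -fpe0 -traceback ovf.f90 -o ovf.exe
program ovf
  implicit none
  real :: x
  integer :: i
  x = 10.0
  do i = 1, 10
    x = x * x   ! exceeds huge(x) ~ 3.4e38 after a few iterations
  end do
  print *, x
end program ovf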
So perhaps something in the way we are using Intel Fortran/Intel MPI is causing this? We are generally running Intel 18.0.5 with Intel MPI 19.1.0, or even Intel 19.1.0 with Intel MPI 19.1.0. Or maybe another module we have loaded (say, gcc 6.5) is causing it?
Huh. Do you have any idea why gcc would be hijacking the trace? I do have a gcc module loaded, but only because (I think?) we need it for icpc or icc. But when we build, we are definitely Intel all the way:
-- The Fortran compiler identification is Intel 18.0.5.20180823
-- The CXX compiler identification is Intel 18.0.5.20180823
-- The C compiler identification is Intel 18.0.5.20180823
-- Check for working Fortran compiler: /usr/local/intel/2018/compilers_and_libraries_2018.5.274/linux/bin/intel64/ifort
-- Check for working Fortran compiler: /usr/local/intel/2018/compilers_and_libraries_2018.5.274/linux/bin/intel64/ifort - works
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Checking whether /usr/local/intel/2018/compilers_and_libraries_2018.5.274/linux/bin/intel64/ifort supports Fortran 90
-- Checking whether /usr/local/intel/2018/compilers_and_libraries_2018.5.274/linux/bin/intel64/ifort supports Fortran 90 - yes
-- Check for working CXX compiler: /usr/local/intel/2018/compilers_and_libraries_2018.5.274/linux/bin/intel64/icpc
-- Check for working CXX compiler: /usr/local/intel/2018/compilers_and_libraries_2018.5.274/linux/bin/intel64/icpc - works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /usr/local/intel/2018/compilers_and_libraries_2018.5.274/linux/bin/intel64/icc
-- Check for working C compiler: /usr/local/intel/2018/compilers_and_libraries_2018.5.274/linux/bin/intel64/icc - works
I am very weak when it comes to Linux, but I'm guessing that some library routine you called (what is libucs?) is where the error occurred, and that it had code to set up gcc's error handling.
Looks like UCX (http://www.openucx.org) provides it. Which, I guess, means MPI had a bad time, but when an MPI application crashes, MPI usually fails with it...
We are using Intel MPI, so it must be picking this up somehow... but I'm also running on an Omni-Path system, where libfabric should not care about the Mellanox-oriented UCX... grah.
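If it really is UCX's signal handler intercepting the trap, one thing I may try (an assumption on my part, based on UCX's documented UCX_HANDLE_ERRORS config variable; I have not verified it fixes this) is disabling UCX's error handlers so the Intel runtime's handler can fire instead:

# hypothetical workaround: UCX's crash handling defaults to "bt"
# (print the backtrace we've been seeing); "none" should leave the
# signal alone so the Intel -traceback handler can report instead
export UCX_HANDLE_ERRORS=none
mpirun -np <nprocs> ./GEOSgcm.x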
Floating invalid suggests that some previous operation resulted in a NaN. Try compiling with -fpe0 and see if anything shakes out.
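For example (a hypothetical repro on my part, not your code), -fpe0 turns the first invalid operation into an immediate abort with a traceback at the offending line, rather than letting the NaN propagate until UCX catches a signal much later:

! nan.f90 -- sketch of an operation that raises "floating invalid"
! build: ifort -fpe0 -traceback nan.f90 -o nan.exe
program nan
  implicit none
  real :: a, b
  call random_number(a)   ! value unknown at compile time
  a = -(a + 1.0)          ! strictly negative
  b = sqrt(a)             ! invalid operation: NaN, trapped by -fpe0
  print *, b
end program nan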