I am trying to build netcdf-126.96.36.199 with parallel support using the Intel MPI Library 4.1, in order to then build RegCM-188.8.131.52.
I set the following environment variables before running the configure command:
export CFLAGS='-O3 -xHost -ip -no-prec-div -static-intel'
export CXXFLAGS='-O3 -xHost -ip -no-prec-div -static-intel'
export FCFLAGS='-O3 -xHost -ip -no-prec-div -static-intel'
export CPP='mpiicc -E'
export CXXCPP='mpiicpc -E'
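For reference, a corrected environment setup might look like the sketch below. Note the flags use a capital `O` (`-O3`), not a zero (`-03`), and the Fortran flags variable that netCDF's configure reads is `FCFLAGS`, not `FFFLAGS`. The `CC`/`CXX`/`FC` assignments are my assumption about how to make configure pick up the Intel MPI compiler wrappers; they are not in the original post:

```shell
#!/bin/sh
# Hedged sketch: typical environment for building netCDF with Intel MPI.
# The CC/CXX/FC wrapper names are assumptions based on Intel MPI conventions.
export CC=mpiicc
export CXX=mpiicpc
export FC=mpiifort
export CFLAGS='-O3 -xHost -ip -no-prec-div -static-intel'
export CXXFLAGS='-O3 -xHost -ip -no-prec-div -static-intel'
export FCFLAGS='-O3 -xHost -ip -no-prec-div -static-intel'
export CPP='mpiicc -E'
export CXXCPP='mpiicpc -E'
# Sanity check: the optimization flag should read -O3, not -03.
echo "$CFLAGS"
```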
Then I ran the configure command as
./configure --prefix=/export/home/dkumar/RLIB CPPFLAGS=-I/export/home/dkumar/RLIB/include LDFLAGS=-L/export/home/dkumar/RLIB/lib LIBS="-L/export/home/dkumar/RLIB/lib -lhdf5_hl -lhdf5 -lz -lsz -ldl" --enable-large-file-tests --enable-parallel-tests --enable-netcdf-4 --enable-shared
(Note that LIBS should be given only once; if it is assigned twice on the command line, the second assignment silently overrides the first, so -ldl has been merged into the single LIBS value here.)
The configure and make commands completed without any error, but make check reported that one of the tests, run_par_test.sh, failed. I am not able to understand what went wrong; make install nevertheless completed without any error.
Please help me correct this error. As I am new to HPC systems, I could not figure it out on my own.
The files test-suite.log and config.log are attached for reference.
Any help would be appreciated.
Thanks and Regards
As far as I see there's the following error in the test-suite.log:
open_hca: device mlx4_0 not found
This means that for some reason the mlx4_0 InfiniBand* (IB) device is missing on the system. You can check the list of available IB devices with the ibv_devices/ibv_devinfo utilities (if available). This may be because the openibd service is not running - please check it.
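The checks suggested above can be sketched as follows. This is a hedged diagnostic, not a definitive procedure: the tool names and the openibd service come from the reply itself, but the init-script path is an assumption and may differ on your distribution:

```shell
#!/bin/sh
# Hedged diagnostic sketch: list InfiniBand devices if the tools exist.
if command -v ibv_devices >/dev/null 2>&1; then
    ibv_devices            # lists HCAs such as mlx4_0
    ibv_devinfo            # prints per-device details (state, ports)
else
    echo "ibv_devices not found - InfiniBand userspace tools are not installed"
fi
# Check whether the openibd service is running (path is an assumption):
if [ -x /etc/init.d/openibd ]; then
    /etc/init.d/openibd status
fi
```

If `ibv_devices` is missing entirely, as in the original poster's case, the machine most likely has no InfiniBand stack installed at all.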
Thanks for the reply. You are correct; I probably do not have InfiniBand configured on my machine, as the utilities you mentioned are not available. Furthermore, is it possible to run the parallel mode of netcdf without an InfiniBand setup?
In this case you can try to run your MPI application over TCP - just set the variable I_MPI_FABRICS=shm:tcp (shared memory as the intra-node fabric and TCP as the inter-node fabric). See the Intel® MPI Library for Linux* OS Reference Manual for details about this variable.
Make sure the I_MPI_FABRICS variable has not already been set elsewhere in your scripts.
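Putting the suggestion together, a minimal sketch of forcing the TCP fabric before re-running the parallel tests might look like this; the mpirun invocation at the end is illustrative only (the binary name and process count are assumptions, and re-running make check from the netcdf build tree would exercise run_par_test.sh with the new fabric):

```shell
#!/bin/sh
# Hedged sketch: select shared memory + TCP instead of InfiniBand for Intel MPI.
export I_MPI_FABRICS=shm:tcp
echo "I_MPI_FABRICS=$I_MPI_FABRICS"
# Illustrative launch (binary name and -np count are assumptions):
# mpirun -np 4 ./my_parallel_test
# Or, from the netcdf build directory, re-run the failing test suite:
# make check
```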