Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Building netcdf-4.3.3.1 with Intel MPI library with parallel support : FAIL: run_par_test.sh

Dhirendra_K_
Beginner

Dear Support

I am trying to build netcdf-4.3.3.1 with parallel support using the Intel MPI Library 4.1 in order to build RegCM-4.4.5.5.

I set the following environment variables before running the configure command:

export CC=mpiicc

export CXX=mpiicpc

export CFLAGS=' -O3 -xHost -ip -no-prec-div -static-intel'

export CXXFLAGS=' -O3 -xHost -ip -no-prec-div -static-intel'

export F77=mpiifort

export FC=mpiifort

export F90=mpiifort

export FFLAGS=' -O3 -xHost -ip -no-prec-div -static-intel'

export CPP='mpiicc -E'

export CXXCPP='mpiicpc -E'

Then I ran the configure command as:

./configure --prefix=/export/home/dkumar/RLIB CPPFLAGS=-I/export/home/dkumar/RLIB/include LDFLAGS=-L/export/home/dkumar/RLIB/lib LIBS="-ldl" LIBS="-L/export/home/dkumar/RLIB/lib -lhdf5_hl -lhdf5 -lz -lsz" --enable-large-file-tests --enable-parallel-tests --enable-netcdf-4 --enable-shared

The configure and make commands completed successfully without any errors, but make check reported that one of the tests, run_par_test.sh, failed. I am not able to understand what went wrong; make install still completes without any error.

Please help me correct this error. I am new to HPC systems, so I could not figure it out on my own.

The test-suite.log and config.log files are attached for reference.

Any help would be appreciated

Thanks and Regards

Dhirendra

Artem_R_Intel1
Employee

Hello Dhirendra,

As far as I can see, there's the following error in the test-suite.log:

open_hca: device mlx4_0 not found

This means that for some reason the mlx4_0 InfiniBand* (IB) device is missing on the system. You can check the list of available IB devices with the ibv_devices/ibv_devinfo utilities (if available). This may be because the openibd service isn't running - please check it, for example as sketched below.
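A minimal sketch of those checks (assuming the OFED user-space tools are installed; exact commands vary by distribution):

# List the IB devices visible to the verbs stack (empty output means no HCA is detected)
ibv_devices

# Show detailed information, including port state, for each device
ibv_devinfo

# Check whether the openibd service is running (SysV-style init assumed)
service openibd status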

 

Dhirendra_K_
Beginner

Hi Artem,

Thanks for the reply. You are correct; I probably don't have InfiniBand configured on my machine, as the utilities you mentioned are not available. Is it possible to run netcdf in parallel mode without an InfiniBand setup?

Thanks again...

Dhirendra

 

Artem_R_Intel1
Employee

Hello Dhirendra,

In this case you can try to run your MPI application over TCP - just set the variable I_MPI_FABRICS=shm:tcp (shared memory as the intra-node fabric and TCP as the inter-node fabric). See the Intel® MPI Library for Linux* OS Reference Manual for details about this variable.
Make sure that the I_MPI_FABRICS variable hasn't already been set elsewhere in your scripts.
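For example, a minimal sketch (assuming a bash shell; the application name below is just a placeholder):

# Select shared memory within a node and TCP between nodes for all subsequent MPI runs
export I_MPI_FABRICS=shm:tcp

# Then rerun the failing parallel test
make check

# Alternatively, set the fabric for a single run only via mpirun's -genv option
mpirun -genv I_MPI_FABRICS shm:tcp -n 4 ./your_mpi_app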
