Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

MPI (?) "crashing" if more than 4 cores used (Intel compiler)

GervyINT
Novice

Hi,

I use the WRF-ARW model built with the Intel ifort/icc compilers. To compile WRF, NetCDF and MPICH are also required, built with the same compilers.

In particular, WRF can be compiled with smpar (shared-memory parallel) or dmpar (distributed-memory parallel). Running WRF built with smpar works fine. With dmpar, however, if I try to use more than 4 cores WRF does not start (no log files are produced), the terminal crashes, and I cannot get any information out of it. If I use exactly 4 cores, WRF runs fine.

I have a Linux machine (kernel 5.12.8-300.fc34.x86_64) with an AMD Ryzen 9 5900X 12-core processor. I built everything with the latest Intel compilers (icc/ifort). The MPI version is MPICH 3.4.2.

Note that compiling and running with smpar works fine, and when compiling with the GNU compilers both dmpar and smpar work fine.

Thanks for your support, any suggestions are welcome.

PS: trying the same thing on another computer with the same Linux kernel but an Intel processor instead of AMD, this "strange problem" does not happen.

ShivaniK_Intel
Moderator

Hi,


Thanks for reaching out to us.


Could you please provide us with the versions of the Intel ifort/icc compilers?


Could you also provide us with the commands used for compilation and execution?


Thanks & Regards

Shivani


GervyINT
Novice

Hi! Thanks for your reply.

 

I have:

icc version 2021.2.0 (gcc version 11.1.1 compatibility)

ifort version 2021.2.0

 

Typically, for compilation I use the configuration files provided with the NetCDF, MPI and WRF model packages, generated when you launch the ./configure script.

Then, usually, I run the WRF model like this: mpiexec -n <number of cores> -host localhost wrf.exe
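
When the run dies immediately, I also try to capture whatever the launcher prints and check the per-rank logs; a dmpar run normally writes rsl.out.* / rsl.error.* files in the run directory. A rough example (the core count and log name are just placeholders):

# run from the WRF run directory, redirecting the launcher's own output
ulimit -s unlimited
mpiexec -n 6 ./wrf.exe > mpiexec.log 2>&1
# inspect the launcher log and any per-rank logs that were produced
tail mpiexec.log
ls rsl.out.* rsl.error.* 2>/dev/null && tail rsl.error.0000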

 

Fabio.

ShivaniK_Intel
Moderator

Hi,


Could you please provide us with a screenshot of the options available for selecting the smpar and dmpar numbers while configuring WRF?


What option did you choose during the configuration of WRF? Example: 57. (smpar) 58. (dmpar) 59. (dm+sm)


Are you facing a similar issue with the Intel MPI?


Thanks & Regards

Shivani


GervyINT
Novice

Hi Shivani!

 

First of all, I compile using the following environment variables:

export CC=icc
export FC=ifort
export F9X=ifort
export F90=ifort
export F77=ifort
export CXX=icpc
export CFLAGS='-O3 -ip -no-prec-div'
export FFLAGS='-O3 -ip -no-prec-div'
export CXXFLAGS='-O3 -ip -no-prec-div'
export CPP='icc -E'
export CXXCPP='icpc -E'

export F90=''
export F90FLAGS=''

export WRFIO_NCD_NO_LARGE_FILE_SUPPORT=1
export WRF_EM_CORE="1"
unset limits
export MP_STACK_SIZE=64000000
ulimit -s unlimited
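
If it helps, the MPICH compiler wrappers can report which underlying compiler and flags they use, so I can verify they actually picked up icc/ifort (the install prefix below is only an example of where my MPICH lives):

# MPICH wrappers print the underlying compiler and flags with -show
/opt/mpich-3.4.2/bin/mpicc -show
/opt/mpich-3.4.2/bin/mpif90 -show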

 

And:

source /opt/intel/oneapi/setvars.sh

 

Then, in the WRF directory, when I run ./configure, I obtain:

(...)

1. (serial) 2. (smpar) 3. (dmpar) 4. (dm+sm) PGI (pgf90/gcc)
5. (serial) 6. (smpar) 7. (dmpar) 8. (dm+sm) PGI (pgf90/pgcc): SGI MPT
9. (serial) 10. (smpar) 11. (dmpar) 12. (dm+sm) PGI (pgf90/gcc): PGI accelerator
13. (serial) 14. (smpar) 15. (dmpar) 16. (dm+sm) INTEL (ifort/icc)
17. (dm+sm) INTEL (ifort/icc): Xeon Phi (MIC architecture)
18. (serial) 19. (smpar) 20. (dmpar) 21. (dm+sm) INTEL (ifort/icc): Xeon (SNB with AVX mods)
22. (serial) 23. (smpar) 24. (dmpar) 25. (dm+sm) INTEL (ifort/icc): SGI MPT
26. (serial) 27. (smpar) 28. (dmpar) 29. (dm+sm) INTEL (ifort/icc): IBM POE
30. (serial) 31. (dmpar) PATHSCALE (pathf90/pathcc)
32. (serial) 33. (smpar) 34. (dmpar) 35. (dm+sm) GNU (gfortran/gcc)
36. (serial) 37. (smpar) 38. (dmpar) 39. (dm+sm) IBM (xlf90_r/cc_r)
40. (serial) 41. (smpar) 42. (dmpar) 43. (dm+sm) PGI (ftn/gcc): Cray XC CLE
44. (serial) 45. (smpar) 46. (dmpar) 47. (dm+sm) CRAY CCE (ftn $(NOOMP)/cc): Cray XE and XC
48. (serial) 49. (smpar) 50. (dmpar) 51. (dm+sm) INTEL (ftn/icc): Cray XC
52. (serial) 53. (smpar) 54. (dmpar) 55. (dm+sm) PGI (pgf90/pgcc)
56. (serial) 57. (smpar) 58. (dmpar) 59. (dm+sm) PGI (pgf90/gcc): -f90=pgf90
60. (serial) 61. (smpar) 62. (dmpar) 63. (dm+sm) PGI (pgf90/pgcc): -f90=pgf90
64. (serial) 65. (smpar) 66. (dmpar) 67. (dm+sm) INTEL (ifort/icc): HSW/BDW
68. (serial) 69. (smpar) 70. (dmpar) 71. (dm+sm) INTEL (ifort/icc): KNL MIC
72. (serial) 73. (smpar) 74. (dmpar) 75. (dm+sm) FUJITSU (frtpx/fccpx): FX10/FX100 SPARC64 IXfx/Xlfx

 

and I choose option 15 (dmpar).
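
After configuring, I also do a quick sanity check that configure.wrf picked up the MPI wrappers from my MPICH build (the grep simply matches the usual DM_FC/DM_CC entries):

# show which MPI compiler wrappers the dmpar build will call
grep -E "DM_FC|DM_CC" configure.wrf
# confirm the wrappers on PATH are the MPICH ones built with icc/ifort
which mpif90 mpicc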

 

Regarding MPI, I only use MPICH downloaded from the mpich.org site, currently v3.4.2: http://www.mpich.org/static/downloads/3.4.2/mpich-3.4.2.tar.gz
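
I build it with the usual configure/make steps using the Intel compilers; roughly like this (the install prefix and the ch4:ofi device choice are just what I typically use, not something required by WRF):

tar xzf mpich-3.4.2.tar.gz && cd mpich-3.4.2
./configure CC=icc CXX=icpc FC=ifort F77=ifort --prefix=/opt/mpich-3.4.2 --with-device=ch4:ofi
make -j && make install
export PATH=/opt/mpich-3.4.2/bin:$PATH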

 

I hope I have answered your questions; I am available for further information.

 

Fabio

ShivaniK_Intel
Moderator

Hi,


We are working on it and will get back to you soon.


Thanks & Regards

Shivani



JyotsnaK_Intel
Moderator

Thank you for your inquiry. We offer support for hardware platforms that the Intel® oneAPI product supports. These platforms include those that are part of the Intel® Core™ processor family or higher, the Intel® Xeon® processor family, the Intel® Xeon® Scalable processor family, and others, which can be found here: Intel® oneAPI Base Toolkit System Requirements, Intel® oneAPI HPC Toolkit System Requirements, and Intel® oneAPI IoT Toolkit System Requirements.

If you wish to use oneAPI on hardware that is not listed at one of the sites above, we encourage you to visit and contribute to the open oneAPI specification - https://www.oneapi.io/spec/


JyotsnaK_Intel
Moderator

This thread will no longer be monitored by Intel. If you need further assistance, please post a new question.

