
NAMD segmentation fault when running with mpirun (Intel 2015)

psing51
New Contributor I

I have compiled NAMD 2.10 with the Intel 2015 compiler suite. I saw the benchmark figures for the Intel MIC card, so I am trying to benchmark NAMD in a host-processor-only configuration. The following looks like an application-related issue, but since I used the Intel compilers I am posting it here.
At present I am benchmarking the application on my system (CentOS 6.5, Xeon 2670 v3); I will incrementally add optimization flags appropriate to my architecture.

I am using the apoa1.tar.gz example for benchmarking NAMD.
 
Now when I use:
  • mpirun -np 1 ./namd2 +ppn 23 +idlepoll ./apoa1/apoa1.namd

I get:

[0] Stack Traceback:
  [0:0] CmiAbortHelper+0x71  [0xf02481]
  [0:1] ConverseInit+0x30a  [0xf03a6a]
  [0:2] _ZN7BackEnd4initEiPPc+0x89  [0x612c09]
  [0:3] main+0x43  [0x60b5e3]
  [0:4] __libc_start_main+0xfd  [0x339c61ed5d]
  [0:5]   [0x568839]

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 48184 RUNNING AT shavak249
=   EXIT CODE: 139
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
  • mpirun -np 1 ./namd2 ./apoa1/apoa1.namd  @ Xeon 2670 V3
    WallClock: 517.594604  CPUTime: 517.594604  Memory: 469.996094 MB
    [Partition 0][Node 0] End of program
  • mpirun -np 8 ./namd2 ./apoa1/apoa1.namd
    WallClock: 77.789726  CPUTime: 77.789726  Memory: 335.437500 MB
    [Partition 0][Node 0] End of program
  • mpirun -np 23 ./namd2 ./apoa1/apoa1.namd
    WallClock: 40.764713  CPUTime: 40.764709  Memory: 464.347656 MB
    [Partition 0][Node 0] End of program

I suppose I am getting my task done with mpirun's -np flag, but I want to know why the segfault occurs when the +ppn flag is used with mpirun. The benchmark I referred to uses the +ppn flag, so I now doubt whether I have built the binary correctly (attached: my NAMD installation script).
The attached script will let you replicate my build procedure exactly.
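
For reference, my guess is that +ppn only applies to an SMP build of Charm++/NAMD, so perhaps my non-SMP binary is the problem. Below is a rough sketch of what I think the SMP rebuild would look like; the directory names and arch strings are assumptions from my own tree, not taken from the attached script:

    # hedged sketch: build the SMP flavour of Charm++, then point NAMD at it
    cd charm-6.6.1                     # Charm++ version bundled with NAMD 2.10 (assumption)
    ./build charm++ mpi-linux-x86_64 smp icc --with-production
    cd ../NAMD_2.10_Source
    ./config Linux-x86_64-icc --charm-arch mpi-linux-x86_64-smp-icc   # must match the charm build directory name
    cd Linux-x86_64-icc && make

Please correct me if +ppn is supposed to work with a plain (non-SMP) MPI build as well.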

Also, when I ran the same case on a different machine I got:
mpirun -np 1 ./namd2 ./apoa1/apoa1.namd  @ Xeon 2670 V2
WallClock: 470.422058  CPUTime: 470.422058  Memory: 467.085938 MB

I understand that the Xeon v3 is clocked at 2.3 GHz and the Xeon v2 at 2.6 GHz, so it would be great if you could suggest some Intel compiler flags or changes to my NAMD compilation procedure so that NAMD performance can improve by taking advantage of the hardware on the Xeon 2670 v3 (more cores!). In other words, how can I specify an MPI+OpenMP hybrid configuration?
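
To make the question concrete, here is the kind of hybrid launch I have in mind; the rank/thread split and the pinning variable are just my guesses, and as far as I understand NAMD's hybrid mode is MPI ranks plus Charm++ SMP worker threads via +ppn rather than OpenMP:

    # hedged sketch: one MPI rank per socket, ~11 worker threads per rank on a 2 x 12-core node (assumes an SMP build)
    export I_MPI_PIN_DOMAIN=socket
    mpirun -np 2 ./namd2 +ppn 11 +idlepoll ./apoa1/apoa1.namd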

Eagerly awaiting your reply,

TimP
Honored Contributor III

This would be more topical on the HPC and cluster forum, as it doesn't involve MIC. My own guess is that +ppn was needed only for some past release of MPI on multiple nodes. You may need to collect some reporting with I_MPI_DEBUG set to see what the difference due to +ppn might be.
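
For example, something along these lines should show the pinning and startup details for both launch styles (debug level 5 is just a common choice, not a requirement):

    I_MPI_DEBUG=5 mpirun -np 1 ./namd2 +ppn 23 +idlepoll ./apoa1/apoa1.namd
    I_MPI_DEBUG=5 mpirun -np 23 ./namd2 ./apoa1/apoa1.namd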

On the v2, the AVX compiler option should give full performance, while AVX2 may have an advantage on the v3. People with NAMD experience may know more.
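
As a rough illustration only, the switch would look something like this in the Intel compiler options used for the NAMD build (the FLOATOPTS variable name is an assumption about the arch file layout; adjust to wherever your script sets the compiler flags):

    # E5-2670 v2 (Ivy Bridge): AVX
    FLOATOPTS = -O2 -xAVX
    # E5-2670 v3 (Haswell): AVX2
    FLOATOPTS = -O2 -xCORE-AVX2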
