I was running a program on Skylake nodes. If I run it on one node (np=2, ph=2), the program completes successfully. However, if I run it across two nodes (np=2, ph=1), I get the following assertion failure:
rank = 1, revents = 8, state = 8
Assertion failed in file ../../src/mpid/ch3/channels/nemesis/netmod/tcp/socksm.c at line 2988: (it_plfd->revents & POLLERR) == 0
internal ABORT - process 0
Does anyone know the possible causes of this type of assertion failure? The weird thing is that all of my colleagues who use csh can run the program without any error, while everyone who uses bash (including me) always sees the same failure at the same line (2988).
Tags:
- Cluster Computing
- General Support
- Intel® Cluster Ready
- Message Passing Interface (MPI)
- Parallel Computing
Intel mpiexec 2019.0.6 does not show an option -ph.
Are you intending to use -ppn (Processes Per Node) instead?
Jim Dempsey
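For reference, a minimal sketch of what the two launch configurations could look like with -ppn, assuming Intel MPI's mpirun; the node names and the executable ./my_app are placeholders:

# 2 ranks on a single node
mpirun -n 2 -ppn 2 -hosts node1 ./my_app

# 2 ranks spread across two nodes, 1 rank per node
mpirun -n 2 -ppn 1 -hosts node1,node2 ./my_app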
Hi
This type of error occurs when one of the MPI processes is terminated by a signal (for example, SIGTERM or SIGKILL); the surviving rank then sees an error (POLLERR) on its TCP connection to the terminated process.
Possible reasons include a host reboot, an unexpected signal, the OOM killer, and others.
Could you check whether you are able to ssh to the other nodes?
Could you also look at this thread and see whether it helps you: https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/747448
Thanks
Prasanth
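As a quick connectivity check between the nodes of the job, something like the following can be run from the head node; node2 is a placeholder for the second Skylake node and should return its hostname without prompting for a password:

# passwordless ssh must work for the Hydra launcher to start remote ranks
ssh node2 hostname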
Someone may close the ticket now. I found that the issue was related to the limited stack size. Some dynamic arrays were not being passed into subroutines and instead ended up allocated as local arrays inside the subroutines. This leads to a memory problem and eventually crashes one of the nodes.
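That would also fit the csh/bash observation above if the two shells end up with different stack limits on the compute nodes. A minimal sketch of checking and raising the limit in each shell, assuming the cluster allows an unlimited stack (otherwise use a larger explicit value):

# bash: show the current stack limit, then raise it for this session
ulimit -s
ulimit -s unlimited

# csh/tcsh equivalent
limit stacksize
limit stacksize unlimited

If the code is Fortran built with the Intel compiler, compiling with -heap-arrays is another common way to keep large temporary and automatic arrays off the stack.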
Hi,
Thanks for the confirmation. We will go ahead and close this thread. Feel free to reach out to us if you have any further queries.
--Rahul
