Intel® oneAPI HPC Toolkit
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

trivial code fails sometimes under SGE: HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:70): assert (!(pollfds[i].revents ...)) failed


A trivial ring-passing .f90 program fails to start 50% of the time on our cluster (SGE 6.2u5). The same problem occurs with large codes:

The error message:

[mpiexec@compute-8-15.local] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:70): assert (!(pollfds[i].revents & ~POLLIN & ~POLLOUT & ~POLLHUP)) failed
[mpiexec@compute-8-15.local] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:511): error waiting for event
[mpiexec@compute-8-15.local] main (./ui/mpich/mpiexec.c:548): process manager error waiting for completion

The F90 code is a 51-line .f90 ring passing 1 integer from rank 0->1->... (attached below); the test script is trivial:

#!/bin/csh
#$ -cwd -j y
#$ -o
#$ -pe orte_ib 72
#$ -q sTNi.q
echo + `date +'%Y.%m.%d %T'` started on host `hostname` in queue $QUEUE with jobid=$JOB_ID
echo using $NSLOTS slots on:
echo cwd=`pwd`

# the ORTE implementation needs this
setenv  OMPI_MCA_plm_rsh_disable_qrsh 1

# use Intel's mpirun
set MPICH = /software/intel/impi/

# tell Intel's mpi implementation to use IB
setenv I_MPI_FABRICS "shm:ofa"

# now run ring0f
$MPICH/bin/mpirun -np $NSLOTS ./ring0f
echo = `date +'%Y.%m.%d %T'` done.

and the test works half the time. Successful output:

Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
+ 2014.09.18 13:02:54 started on host compute-7-15.local in queue sTNi.q with jobid=1928756
using 72 slots on:
compute-7-15.local 3 sTNi.q@compute-7-15.local UNDEFINED
compute-7-23.local 11 sTNi.q@compute-7-23.local UNDEFINED
compute-7-16.local 9 sTNi.q@compute-7-16.local UNDEFINED
compute-9-4.local 18 sTNi.q@compute-9-4.local UNDEFINED
compute-7-7.local 3 sTNi.q@compute-7-7.local UNDEFINED
compute-8-4.local 10 sTNi.q@compute-8-4.local UNDEFINED
compute-8-20.local 2 sTNi.q@compute-8-20.local UNDEFINED
compute-8-14.local 2 sTNi.q@compute-8-14.local UNDEFINED
compute-10-4.local 14 sTNi.q@compute-10-4.local UNDEFINED
 Process            0  got          456  at pass           1
 Process            1  got          456  at pass           1
 Process            2  got          456  at pass           1

Process           71  got          456  at pass           1
= 2014.09.18 13:02:58 done.

I repeat the test and I get:

Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
+ 2014.09.18 16:23:24 started on host compute-8-15.local in queue sTNi.q with jobid=1939073
using 72 slots on:
compute-8-15.local 8 sTNi.q@compute-8-15.local UNDEFINED
compute-10-0.local 48 sTNi.q@compute-10-0.local UNDEFINED
compute-8-2.local 1 sTNi.q@compute-8-2.local UNDEFINED
compute-8-6.local 8 sTNi.q@compute-8-6.local UNDEFINED
compute-7-8.local 4 sTNi.q@compute-7-8.local UNDEFINED
compute-8-17.local 1 sTNi.q@compute-8-17.local UNDEFINED
compute-9-2.local 2 sTNi.q@compute-9-2.local UNDEFINED
[mpiexec@compute-8-15.local] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:70): assert (!(pollfds[i].revents & ~POLLIN & ~POLLOUT & ~POLLHUP)) failed
[mpiexec@compute-8-15.local] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:511): error waiting for event
[mpiexec@compute-8-15.local] main (./ui/mpich/mpiexec.c:548): process manager error waiting for completion
= 2014.09.18 16:23:25 done.

Any idea what causes this, or how to track the problem?

Sylvain Korzennik - HPC analyst, SI/HPC

(Smithsonian Institution High Performance Computer)

PS: % uname -a
Linux 2.6.18-238.19.1.el5 #1 SMP Fri Jul 15 07:31:24 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux 

     % ifort -V
Intel(R) Fortran Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version Build 20110112
Copyright (C) 1985-2011 Intel Corporation.  All rights reserved.

program ring0f
  !  Ring.c-> MPI example from
  !  modified by SGK Sep 2014 to f90
  !  Write a program that takes data from process zero (0 to quit) and sends it
  !  to all of the other processes by sending it in a ring. That is, process i
  !  should receive the data and send it to process i+1, until the last process
  !  is reached.  Assume that the data consists of a single integer. Process zero
  !  reads the data from the user.
  include 'mpif.h'
  integer nPass, iVal
  integer iErr, iStatus, iRank, iSize, iDest, iFrom, nCount
  dimension iVal(10)
  dimension iStatus(MPI_STATUS_SIZE)
  integer mpiComm, msgTag
  data nPass/1/, iVal/1234,9*0/
  mpiComm = MPI_COMM_WORLD
  msgTag  = 0
  call MPI_INIT(iErr)
  call MPI_COMM_RANK(mpiComm, iRank, iErr)
  call MPI_COMM_SIZE(mpiComm, iSize, iErr)
  !print *, 'i>rank ', iRank, ' size ', iSize
  iDest = iRank+1
  iFrom = iRank-1
  nCount = 1
  do iPass = 1, nPass
     if (iRank.eq.0) then
        iVal(1) = 456
        !print *, iRank, '->', iDest
        call MPI_SEND(iVal, nCount, MPI_INTEGER, iDest, msgTag, mpiComm, iErr)
     else
        !print *, iRank, '<-', iFrom
        call MPI_RECV(iVal, nCount, MPI_INTEGER, iFrom, msgTag, mpiComm, iStatus, iErr)
        if (iRank .lt. iSize-1) then
           !print *, iRank, '->', iDest
           call MPI_SEND(iVal, nCount, MPI_INTEGER, iDest, msgTag, mpiComm, iErr)
        end if
     end if
     print *, 'Process ', iRank, ' got ', iVal(1), ' at pass', iPass
  end do
  !print *, 'e>rank ', iRank, ' size ', iSize
  call MPI_FINALIZE(iErr)
end program ring0f


# Makefile for demos
# <- Last updated: Mon May 24 13:09:30 2010 -> SGK
# ---------------------------------------------------------------------------
# mpi location
MPDIR = /software/intel/impi/
MPINC = $(MPDIR)/include64
MPLIB = $(MPDIR)/lib64
MPBIN = $(MPDIR)/bin64
# flags
# compiler/linker
CC     = icc  $(CFLAGS) $(MFLAGS) $(IFLAGS)
MPICC  = $(MPBIN)/mpicc -cc=icc $(CFLAGS)
F90    = ifort  $(CFLAGS) $(MFLAGS) $(IFLAGS)
MPIF90 = $(MPBIN)/mpif90 -f90=ifort $(FFLAGS)
%.o: %.f90
        $(F90) -c $<
# ---------------------------------------------------------------------------
all: ring0 ring0f
ring0: ring0.o
        $(MPICC) -o $@ ring0.o
ring0f: ring0f.o
        $(MPIF90) -o $@ ring0f.o
# ---------------------------------------------------------------------------
clean:
	-rm *.o ring0 ring0f
	-rm *.log



Can you get output with



from one of the failing runs?  This can generate a very large amount of output, so please put it into a file and attach it rather than pasting it into the reply.
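A common way to get verbose launcher output from the Intel MPI Library is via its debug environment variables; the specific variables and levels below are an assumption on my part, written as csh lines that could be added to the job script above:

```shell
# Hypothetical debug settings (csh syntax, matching the job script):
# I_MPI_DEBUG raises library verbosity; I_MPI_HYDRA_DEBUG traces the
# Hydra process manager that emits the failing assert.
setenv I_MPI_DEBUG 5
setenv I_MPI_HYDRA_DEBUG 1
$MPICH/bin/mpirun -np $NSLOTS ./ring0f >& ring0f-debug.log
```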


see two outputs attached:  *-success.log and *-failed.log.




Does this also occur on the current version of the Intel® MPI Library, 5.0 Update 1?


We have upgraded to the latest Intel compiler (Version Build 20140120), and I could not reproduce that error.

This being said, (a) that version warns that it is not compatible w/ CentOS 5.x (we run 2.6.18-238.19.1.el5), so we may run into other gotchas (until we upgrade the cluster in a month or two); it would have been nice to know what caused the problem. (b) There is a bug in the distro of that version:

the line

   if ( "$1" == "ia32_intel64" ) then

should be

   if ( "$1" == "intel64" ) then

in the file $PROD_DIR/ipp/bin/ippvars.csh called from the setup file $INTEL_SW/bin/compilervars.csh - the value of $1 is either 'ia32' or 'intel64', so this comparison is never satisfied and the var arch is never set - where was your QC dept when you packaged that version? (INTEL_SW being where we installed the s/w)
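The never-true comparison can be illustrated outside csh; a minimal POSIX-sh sketch (hypothetical, simulating a caller that passes 'intel64' as $1):

```shell
#!/bin/sh
set -- intel64   # simulate: compilervars passes "intel64" as $1
arch=""
# shipped (buggy) test: compares $1 against "ia32_intel64", never matches
if [ "$1" = "ia32_intel64" ]; then arch="intel64"; fi
echo "buggy: arch='$arch'"
# corrected test: matches, so arch is set
if [ "$1" = "intel64" ]; then arch="intel64"; fi
echo "fixed: arch='$arch'"
```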

Also, why do you distribute the script $INTEL_SW/impi/ with the following default setup?

# Default settings for compiler, flags, and libraries
# Determined by a combination of environment variables and tests within
# configure (e.g., determining whehter -lsocket is needee)

and force users to add -fc=ifort - if I use Intel's 'mpif90', I would expect it to use Intel's compilers, not GNU's - the resulting error message, for users who have gfortran in their path, is mysterious at best... (fixing typos in comments would be a plus) - this reflects poorly on Intel's QC, or lack thereof. I have 25 years of experience in HPC, and it is nice to see that Intel maintains its tradition of lack of software expertise...


The first question is specific to Intel® Integrated Performance Primitives and should be posted in that forum. Just keep in mind that 14.0.2 is not the latest compiler version; the problem appears to be corrected in the latest version. I may be wrong though, as I don't use csh. If you update and it doesn't appear to be working, check with the compiler team.

As for the MPI compiler scripts, we include two sets of them.  mpicc, mpicxx, mpifc, mpigcc, mpigxx, mpif77, and mpif90 all default to the GNU* compilers, and you have the option (as you have found) to switch compilers.  This is to provide maximum compatibility with existing projects.  mpiicc, mpiicpc, and mpiifort always use the Intel® Compilers and cannot be changed.  All of this can be found in the Intel® MPI Library Reference Manual for Linux* and for Windows*.
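A quick sketch of the two wrapper families (a command fragment, not tested here; it assumes an Intel MPI bin directory on $PATH, and -f90=ifort is the switch already used in the Makefile above):

```shell
mpif90 -f90=ifort -o ring0f ring0f.f90   # GNU-default wrapper, backend switched to ifort
mpiifort -o ring0f ring0f.f90            # Intel-only wrapper; always uses ifort
mpif90 -show                             # prints the underlying compiler command without running it
```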


Hi James,

  Thanks for your answers, but (1) the latest Intel Cluster Studio release for Linux I have access to is 2013.SP1 - we have an up-to-date s/w maintenance contract, so I have no idea where 15.x is or how to get it. (2) The latest versions are not CentOS 5.x compatible, according to the install package - hence my desire to know if you could figure out what was wrong w/ the original post. (3) The 2013.SP1 distro has a built-in error, whether or not you use csh. (4) Thanks for pointing out mpiifort and the like, although this does not invalidate my previous comment - though it becomes a matter of philosophy.




I see that your support is current.  Send me a screenshot in a private message of what you see on Intel® Registration Center.

The latest versions are compatible with Red Hat*, and *should* work with CentOS*.  Full testing does not occur on CentOS*, so it is not listed as a supported distribution.  Any problems that arise, we'll do what we can to resolve them.  But if it isn't reproducible on a supported OS, it will only be best effort.

As for the original cause of the problem, did you only change the compiler version?  If so, I'm inclined to think it isn't related to MPI.  I've been proven wrong on this before though, so I won't say it is unrelated.

Looking at the .sh version of the script, it appears to set $arch correctly.

[plain]   if [[ "$1" != "ia32" && "$1" != "intel64" && "$1" != "ia32_intel64" ]]; then
       echo "ERROR: Unknown switch '$1'. Accepted values: ia32, intel64, ia32_intel64"
       exit 1;
   fi
   if [ "$arch" = "ia32_intel64" ]; then arch=intel64; fi[/plain]

The problem appears limited to ippvars.csh.

I've sent the typos to our developers.

If you have a compelling reason to switch the default compilers, please share it.