I just upgraded Intel MPI for Windows from 3.0.012 to 4.0.0.011. After the upgrade, I can run parallel cases on a single node without problems, but if I run a parallel case across multiple nodes, my program always stops. Debugging the run shows that the processes start up in shm data transfer mode. If I set I_MPI_FABRICS to shm:tcp, the program also stops. If I set I_MPI_FABRICS to tcp, the program runs. If I set I_MPI_FABRICS to dapl and set I_MPI_FALLBACK to enable, the program also runs, but that is not what I want.

We are developing commercial software, and we want MPI to select the fabric automatically; our users may not know the details of setting these environment variables. The problem happens on both Windows XP 64-bit and Windows 7 64-bit. Has anyone met the same problem? Thanks,
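In the meantime, I force a working fabric per run with -genv. This is just a sketch based on the command from my third case below; the working directory, password file, and executable names are placeholders for my setup:

    rem Force the TCP fabric, which runs across nodes for me
    mpiexec -wdir "mydir" -genv I_MPI_FABRICS tcp -hosts 2 gems3 3 gems4 2 -pwdfile "mypassword" "myfile"

    rem Alternative that also runs for me: DAPL with fallback enabled
    mpiexec -wdir "mydir" -genv I_MPI_FABRICS dapl -genv I_MPI_FALLBACK enable -hosts 2 gems3 3 gems4 2 -pwdfile "mypassword" "myfile"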
"Third case, I ran without -genv I_MPI_FABRICS shm:tcp, and I set host name to the two different name, gem3 and gems4. The output is below.
mpiexec -wdir "mydir" -genv I_MPI_DEBUG 5 -hosts 2 gems3 3 gems4 2 -pwdfile "mypassword" "myfile"
 MPI startup(): shm data transfer mode
 MPI startup(): shm data transfer mode