Intel® oneAPI HPC Toolkit
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

INTEL-MPI-5.0: -prepend-rank on the mpirun command line does not work


Dear developers of Intel-MPI,

I found that the helpful option -prepend-rank does not work when launching a parallelized Fortran code with mpirun under Intel MPI 5.0:

       mpirun -binding -prepend-rank -ordered-output -np 4 ./a.out

The option has no effect with Intel MPI 5.0 (with Intel MPI 4.1 it worked): no rank numbers are prepended to the output lines of the program on the display.

By the way: can anyone tell me what the other option -ordered-output actually does? I included it, hoping for less confusing output from the different ranks on the display, but I have never seen any effect. The mpirun -help information is too short. Are there more detailed explanations of the options somewhere?

Can you fix that -prepend-rank bug soon? It is a really valuable option for debugging.


 Michael R.





The -ordered-output option keeps output from different ranks from mixing.  According to the Reference Manual:

Use this option to avoid intermingling of data output from the MPI processes. This option affects both the standard output and the standard error streams.

When using this option, end the last output line of each process with the end-of-line (\n) character. Otherwise the application may stop responding.

As for -prepend-rank, try using -l (that's a lowercase L).  I believe that will give you the expected result.
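The -l flag goes on the mpirun command line in the same position -prepend-rank did; a minimal invocation, assuming the same a.out as above, would look like:

```shell
# Prefix each output line with the MPI rank that produced it (-l),
# and keep output from different ranks from intermingling (-ordered-output).
mpirun -l -ordered-output -np 4 ./a.out
```

With -l, each line of stdout and stderr is prefixed with the rank number, which serves the same debugging purpose -prepend-rank did in Intel MPI 4.1.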


Dear developers of INTEL-MPI,

There seems to be a parser problem with the mpirun command line of Intel MPI 5.0:

As I already observed (see ),
the command line    mpirun -np 4 -bindings -prepend-rank ./a.out
ran the executable, but without prepending the rank numbers to the standard output.

So I played around with the command line and found that the cause of the problem seems to be the -binding option:

  The parser ignores any further options placed after that option.

   If the  -binding  option is the very last option before the executable, then the executable will not be found.

For example:

This works correctly:   mpirun -np 4 -prepend-rank -v ./a.out

Whereas this does not work:
  d0000000 cl3fr1 292$mpirun  -np 4 -prepend-rank  -binding  ./a.out
  [mpiexec@cl3fr1] set_default_values (../../ui/mpich/utils.c:3178): no executable provided
  [mpiexec@cl3fr1] HYD_uii_mpx_get_parameters (../../ui/mpich/utils.c:3620): setting default values failed
  [mpiexec@cl3fr1] main (../../ui/mpich/mpiexec.c:438): error parsing parameters
  d0000000 cl3fr1 293$

and this runs the executable, but the nonsense option (or any valid option, like -v) following the -binding option is ignored:
                        mpirun  -np 4 -prepend-rank  -binding -mynonsense ./a.out

By the way:  Is the option  -bindings  just the default?

Is this a bug, or am I doing something wrong?



The -binding option expects an argument.  Please see the Intel® MPI Library Reference Manual for a full list of possible arguments.  Go to and search for the section Binding Options.
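As an illustration of the expected syntax (the exact parameter names and values are listed in the Reference Manual; "cell=core" appears later in this thread, while "map=scatter" is just one possible value chosen for the example):

```shell
# -binding takes a quoted parameter string of key=value pairs, e.g.
# bind one process per physical core, spread across the sockets.
mpirun -np 4 -binding "cell=core;map=scatter" ./a.out
```

Because -binding consumes the next token on the command line as its argument, writing it without one (as in the failing commands above) makes the parser swallow whatever follows it, including the executable name.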


Dear Mr. Tullos,

Thank you for your fast answer of 2014-09-02.

From my experience with the -binding option, I suggest that the mpirun command-line parser should issue an error message if the argument following the -binding keyword is invalid. At present it silently accepts everything (and what the -binding option does in that case remains its dark secret).

That caused my problem.

And the explanation of the arguments of the -binding option in the Intel MPI Reference Manual is incomplete:

It is not clear what the default is if some of the possible parameters are missing, e.g.:

   What happens if -binding is missing altogether? Does that mean no binding of the MPI processes at all?

   What happens for -binding "cell=core", i.e. which choice of map=… will be used by default in that case? Is this the suitable choice to switch off possible hyperthreading?


Greetings to you and your team!

  Michael R.


I'll pass your feedback to our developers.

If you don't use -binding, there are default behaviors, and these are dependent on the detected system architecture.  You can also use the I_MPI_PIN_PROCESSOR_LIST environment variable to define the behavior.  The easiest way to see the resulting pinning is to set I_MPI_DEBUG=4 and run your job.  This will show you where the ranks are pinned.