Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

mpitune_fast

L__D__Marks
New Contributor II

When running

mpitune_fast -ppn 64,32,16 -hosts node1,node2,node3 -d /projects/p20005/tuning

 

I get a failure (version 2021.1.1) with the following output:

 

Detected SLURM host file
Unknown option: V
Usage:
mpiexec.slurm args executable pgmargs

where args are command line arguments for mpiexec (see below),
executable is the name of the executable and pgmargs are command line
arguments for the executable. For example the following command will run
the MPI program a.out on 4 processes:

mpiexec.slurm -n 4 a.out

mpiexec.slurm supports the following options:

[-n nprocs]
[-host hostname]
[-verbose]
[-nostdin]
[-allstdin]
[-nostdout]
[-pernode]
[-config config_file]
[-help|-?]
[-man]

[26357] Failed to execute script mpitune_fast
Traceback (most recent call last):
File "mpitune_fast.py", line 919, in <module>
File "mpitune_fast.py", line 798, in main
File "mpitune_fast.py", line 121, in get_args
File "mpitune_fast.py", line 183, in get_impi_build_date
File "subprocess.py", line 411, in check_output
File "subprocess.py", line 512, in run
subprocess.CalledProcessError: Command '['mpiexec', '-V']' returned non-zero exit status 2.
IMPI package: /projects/p20005/intel/oneapi/mpi/2021.1.1
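
In case it helps with diagnosis: the traceback shows mpitune_fast probing the launcher with "mpiexec -V", and on this cluster a plain mpiexec call apparently resolves to the mpiexec.slurm wrapper, which rejects -V. A quick check of which launcher is being picked up looks like the sketch below (the setvars.sh path is my guess at the usual oneAPI layout next to the IMPI package shown above):

# Which mpiexec is first on PATH, and does it accept -V?
# (mpitune_fast expects this call to succeed; the mpiexec.slurm wrapper above clearly rejects it.)
which mpiexec
mpiexec -V

# If the Slurm wrapper shadows Intel MPI, sourcing the oneAPI environment
# should put Intel's mpiexec first on PATH (path assumes the standard layout).
source /projects/p20005/intel/oneapi/setvars.sh
which mpiexec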

 

Is this a known bug that has been fixed in more recent versions? I cannot find anything on the web, and I do not have access to mpitune_fast.py to look at the source code.

SantoshY_Intel
Moderator

Hi,

 

Thanks for posting in the Intel communities.

 

We tried from our end using the latest Intel MPI 2021.8 and were able to run successfully, as shown in the attachment (log.txt).

Please find the attachment (projects.zip) for the resulting files generated by running the command below:

mpitune_fast -ppn 64,32,16 -hosts <node1>,<node2>,<node3> -d projects/p20005/tuning

So, we recommend you use the latest Intel MPI and try again. Please get back to us if you face any issues.

 

Before trying, please go through the Intel MPI system requirements and check whether your system meets all of them.
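
Once the newer release is installed, a minimal sketch of switching to it before rerunning the tuner (the installation path below is only an example; please adjust it to wherever 2021.8 is installed on your cluster):

# Load the newer Intel MPI environment (example path), confirm the version,
# then rerun the tuner with the same options as before.
source /opt/intel/oneapi/setvars.sh
mpiexec -V
mpitune_fast -ppn 64,32,16 -hosts <node1>,<node2>,<node3> -d /projects/p20005/tuning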

 

Thanks & Regards,

Santosh

 

 

L__D__Marks
New Contributor II

Thanks. To give a complete answer, it appears that mpitune_fast 2021.1.1 is broken, whereas 2021.8.0 works (or at least runs through) with Slurm.

 

An important question, since the documentation is very sparse: does mpitune_fast give node-specific results or general ones? For instance, if it is run for host1,host2,host3, will the file produced (cluster_merged_2023-01-11_111645_shm-ofi.dat) also work when host4,host5,host6 are used, where all the nodes are identical, with identical memory, interconnects, etc.?

 

Or are the results node-specific?

SantoshY_Intel
Moderator

Hi,


Glad to know that your issue is resolved.


>>"Or are the results node specific?"

Yes, the results are node-specific.
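
As a usage note, the .dat file produced by mpitune_fast is normally applied to later runs through the I_MPI_TUNING_BIN environment variable; a minimal sketch using the file name from your earlier post (a.out is only a placeholder application):

# Point Intel MPI at the tuning data generated by mpitune_fast
# (file name taken from the earlier post; a.out is a placeholder).
export I_MPI_TUNING_BIN=/projects/p20005/tuning/cluster_merged_2023-01-11_111645_shm-ofi.dat
mpiexec -n 4 ./a.out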


Since your issue is resolved, let us know if we can close this case.


Thanks & Regards,

Santosh




SantoshY_Intel
Moderator

Hi,


We assume that your issue is resolved. If you need any additional information, please post a new question, as this thread will no longer be monitored by Intel.


Thanks,

Santosh

