Beginner

Error while running mpitune utility

Hi, we have an 8-node cluster, each node with an 8-core Xeon. All nodes are connected via Mellanox InfiniBand. I have installed Intel Cluster Studio XE 2015 and have been using mpiicc and mpirun for a while. Now I want to run the mpitune utility with the following hostfile, hosts:

node1
node1
node2
node2
node3
node3

I invoked mpitune from my current directory with:

mpitune -hf hosts -odr /home/srinivasan/

mpitune stops with this output:

25'Nov'15 22:11:38 INF | Starting. Please wait...
25'Nov'15 22:11:38     | MPITune started at  25 November'15 (Wednesday) 16:41:38
25'Nov'15 22:11:38     | MPITune has been started by: srinivasan
25'Nov'15 22:11:38     | Preparing tuner's components...
25'Nov'15 22:11:38     | Initialization of signals handlers...
25'Nov'15 22:11:38     | Start catching signal with code 15 (SIGTERM) ...
25'Nov'15 22:11:38     | Success.
25'Nov'15 22:11:38     | Start catching signal with code 2 (SIGINT) ...
25'Nov'15 22:11:38     | Success.
25'Nov'15 22:11:38     | Initialization of signals handlers completed.
25'Nov'15 22:11:38 CER | Can not continue due to error while detecting Intel MPI Library.
25'Nov'15 22:11:38 CER | A critical error has occurred!
Details:
--------------------------------------------------------------------------------
Type  : <type 'exceptions.Exception'>
Value : Can not continue due to error while detecting Intel MPI Library.
--------------------------------------------------------------------------------
25'Nov'15 22:11:38     | Time of work automatic tuning utility is 0h:0m:0s:37ms
25'Nov'15 22:11:38 CER | Error while terminating child processes. Description: 'NoneType' object has no attribute 'DestroyAllChildProcesses'
25'Nov'15 22:11:38 INF | Safe application's termination completed.
25'Nov'15 22:11:38     | Time of work automatic tuning utility is 0h:0m:0s:37ms

 

I don't know how to rectify this. Any suggestions?

Employee

Hello,

The Intel MPI Library you are using is quite old; please update to the most recent IMPI 5.1.2, which you can find on the Intel Registration Center website.

In your hostfile you don't need to repeat the host names; the number of cores per node can be passed to mpitune via the -pr parameter.

Please try the following and send us the output:
$ cat hosts | uniq > hosts_new
$ mpiicc -o test.x $I_MPI_ROOT/test/test.c
$ mpirun -hostfile ./hosts_new ./test.x
$ mpitune -hf ./hosts_new
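One detail worth noting about the dedup step above: `uniq` only collapses *adjacent* duplicate lines, which happens to work for this hostfile because the repeated names are consecutive. A minimal sketch using `sort -u`, which also handles interleaved entries (the sample hostnames are illustrative only):

```shell
# Build a sample hostfile with interleaved duplicates (illustration only).
printf 'node1\nnode2\nnode1\nnode3\nnode2\n' > hosts

# `uniq` alone would keep the interleaved repeats; `sort -u` removes them all.
sort -u hosts > hosts_new

cat hosts_new   # node1, node2, node3 -- one per line
```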

Best regards,
Michael

Beginner

Hi Michael,
Yes, I know the library is a bit old, but our cluster server runs RHEL 5.4, and only Intel MPI Library 4.1.3.048 is compatible with it. As we don't have a system admin here, disturbing the configuration might become a problem for me.

Anyway, here are the outputs of the commands you suggested (all executed from the root account):

$ cat hosts | uniq > hosts_new
cat: hosts: No such file or directory


$ mpiicc -o test.x test.c
warning #13003: message verification failed for: 28003; reverting to internal message
warning #13003: message verification failed for: 28008; reverting to internal message
/tmp/iccgBuRfW.o: In function `main':
test.c:(.text+0x2e): undefined reference to `__intel_new_feature_proc_init'
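The `__intel_new_feature_proc_init` link error usually means the Intel compiler's runtime libraries were not found at link time, which typically happens when the compiler environment scripts have not been sourced in the current shell. A minimal sketch of the usual remedy, assuming a default /opt/intel install prefix (adjust the paths to your actual Cluster Studio XE layout):

```shell
# Assumed install prefix; adjust to your actual installation paths.
source /opt/intel/bin/compilervars.sh intel64       # Intel C/C++ compiler runtime
source /opt/intel/impi/4.1.3.048/bin64/mpivars.sh   # Intel MPI (mpiicc, mpirun, mpitune)
mpiicc -o test.x test.c
```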

After executing the above command, the test.c file is no longer in that location. It was there before execution, but now it's missing; I don't know what happened.

$ ls (Before execution)
test.c test.cpp test.f test.f90


$ ls (After executing mpiicc -o test.x test.c )
test.cpp test.f test.f90


In which directory should I run the commands below?
$ mpirun -hostfile ./hosts_new ./test.x
$ mpitune -hf ./hosts_new

Regards 
G SRINIVASAN

Employee

Since the same issue is being discussed in an Intel Premier Support (IPS) ticket, I will work on the IPS ticket and update this thread as soon as it is resolved.

 

Best regards,

Michael
