Beginner

Printing core binding information

Hi all,

Ever since I moved to parallel_studio_xe_2020_update2, I have lost the ability to print core-binding information using Intel MPI.

I used to get this information with the following environment variables:

mpiexec  -genv I_MPI_PIN_DOMAIN=core -genv I_MPI_PIN_ORDER=scatter -genv I_MPI_DEBUG=4 -n 4 ./myProgram

Now, nothing more than the program output is printed.

Any idea of what is missing?

 

 

11 Replies
Moderator

Hi Edgar,


We have tried with IMPI 2019u8 (the MPI version that comes with PSXE 2020 update 2) and found no issue.


Could you please check inside the command prompt and tell us whether the behaviour is the same?

-> Set the environment by running compilervars.bat and mpivars.bat.


For full info, please refer to the prerequisite steps in the Getting Started guide (https://software.intel.com/content/www/us/en/develop/documentation/get-started-with-mpi-for-windows/...).


Regards

Prasanth



Beginner

Hi Prasanth, thank you for your answer.

I recently updated my system. I am currently running Linux Mint 20:

NAME="Linux Mint"
VERSION="20 (Ulyana)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 20"

Previously, I was running Linux Mint 19.3.

I also updated to parallel_studio_xe_2020_update2:

$ icc --version
icc (ICC) 19.1.2.254 20200623
Copyright (C) 1985-2020 Intel Corporation. All rights reserved.

$ mpiexec --version
Intel(R) MPI Library for Linux* OS, Version 2019 Update 8 Build 20200624 (id: 4f16ad915)
Copyright 2003-2020, Intel Corporation.

I am running my program using the command prompt. My command is like this:

mpiexec -genv I_MPI_PIN_DOMAIN=core -genv I_MPI_PIN_ORDER=scatter -genv I_MPI_DEBUG=4  -n 4 myProgram

I have also tried exporting the environment variables (e.g. export I_MPI_PIN_DOMAIN=core), without luck.

Setting the I_MPI_DEBUG=4 variable used to do the magic for me, but not anymore.

I am having a similar problem using gcc + mpich. When updating the OS I moved from gcc 7.5 to gcc 9.3, and from mpich.hydra 3.3a2 to mpich.hydra 3.3.2. With gcc + mpich I also lost the ability to print the core-binding information. I do not know whether these two problems are related, but they both appeared just after I updated my system.
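While the debug printout is missing, there is an MPI-library-independent way to check a process's binding: read its allowed-CPU list from /proc. This is a minimal sketch of that idea (the wrapper approach is my suggestion, not something from this thread):

```shell
# Print the set of cores this process is allowed to run on.
# Reading /proc/self/status works regardless of which MPI library
# (Intel MPI or MPICH) launched the process.
grep Cpus_allowed_list /proc/self/status
```

Launched under mpiexec, e.g. `mpiexec -n 4 sh -c 'grep Cpus_allowed_list /proc/self/status'`, each rank reports its own CPU mask, which shows whether the pinning variables took effect even when I_MPI_DEBUG prints nothing.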

 

Thanks again,

Edgar

Beginner

I just re-installed parallel_studio_xe_2019_update5 and recompiled my program.

It worked as expected. The core bindings are printed out.

Running a program compiled with parallel_studio_xe_2020_update2 under parallel_studio_xe_2019_update5 does not produce the core-binding printout.

Running a program compiled with parallel_studio_xe_2019_update5 under parallel_studio_xe_2020_update2 does indeed produce the core-binding printout.

Therefore, in my system, parallel_studio_xe_2020_update2 seems to be the one causing the problem.

Any ideas or suggestions?

Thanks,

Edgar

 

 

 

Moderator

Hi Edgar,


Currently, there is no support for the Linux Mint OS. Please check the system requirements of IMPI (https://software.intel.com/content/www/us/en/develop/articles/intel-mpi-library-release-notes-linux....).


If you have any other system with a supported OS, you could check PSXE 2020 update 2 there.


Could you try the following suggestion to see if it works:

--> Instead of passing it as a command-line argument, check by exporting the I_MPI_DEBUG variable. Also, try the highest debug level:

 export I_MPI_DEBUG=100
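For reference, the fully export-based form of the run might look like this (only the variable names and values come from the thread; the launch command itself is illustrative):

```shell
# Set pinning and debug controls in the environment instead of via -genv.
export I_MPI_PIN_DOMAIN=core
export I_MPI_PIN_ORDER=scatter
export I_MPI_DEBUG=100
# then launch as usual, e.g.:
#   mpiexec -n 4 ./myProgram
echo "I_MPI_DEBUG is $I_MPI_DEBUG"
```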


Since your OS is not supported, if you want to check core-binding information we provide an online Pinning Simulator for Intel® MPI Library (https://software.intel.com/content/www/us/en/develop/articles/pinning-simulator-for-intel-mpi-librar...), where you can experiment with the variables and see how process pinning works in IMPI.


Thanks and Regards

Prasanth 



Beginner

Hi Prasanth,

Unfortunately, I do not have another system in which I can test this issue.

Exporting the variable (I_MPI_DEBUG=100) does not solve the problem.

As I mentioned before, I am having a similar problem with mpich after updating from (gcc 7.5 + mpich 3.3a2) to (gcc 9.3 + mpich 3.3.2).

The problem is discussed in this link: https://github.com/pmodels/mpich/issues/3361. Someone reproduced it on "two different generic Linux x86-64 machines using (mpich) 3.4a2 and 3.3." The issue was flagged as a bug to be fixed.

If Intel MPI is based on, or was updated to, one of these versions of mpich, perhaps that is the source of the problem.

 

Thank you, 

Edgar

Moderator

Hi Edgar,


Apologies for the delay!

We are looking into your query with the internal engineering team. We will get back to you soon.


Thanks & Regards

Prasanth


Beginner

Thank you,  Prasanth.

To me, the problem seems to be related to mpich 3.3.2.

I downloaded and compiled the new mpich 3.4a3 (alpha release), and the problem is gone.

If the latest Intel MPI library is somehow based on mpich 3.3.2, this could be causing the problem.

Thanks again,

Edgar Black

 

Moderator

Hi Edgar,

 

We have checked, and this issue does not appear on supported operating systems.

So we think this might not be a product issue, but rather something related to your system's environment.

Since your OS is unsupported, we cannot comment further on this.

 

If your query is answered, please confirm.

 

Regards

Prasanth

 

 

Moderator

Hi Edgar,


Since your question has been answered, can we close this thread?

You can reach out to us if you have any further questions.


Regards

Prasanth


Beginner

Hi Prasanth,

Yes, it can be closed.

Thanks,

Edgar

Moderator

Hi Edgar,


Thanks for the confirmation.

This issue has been resolved, and we will no longer respond to this thread. If you require additional assistance from Intel, please start a new thread. Any further interaction in this thread will be considered community-only.


Regards

Prasanth

