Intel® MPI Library

ITAC cli disable aggregation

mpiuser1
Beginner

Hi Intel community,

We're trying to use ITAC to trace an MPI application. When the number of processes is over 1000, the resulting file from traceanalyzer --cli --messageprofile aggregates processes over 512 as "aggregated_below." Is there a way to disable this aggregation using the command line interface and have the message profile result for each process separately?
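
For reference, this is roughly the invocation we use (the trace file name is just an example, and output handling is omitted):

  traceanalyzer --cli --messageprofile app.stf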

Thanks,

Erica

JananiC_Intel
Moderator

Hi,


Thanks for posting in Intel forums.


We are forwarding this case to the HPC forum for a quicker response.



GouthamK_Intel
Moderator

Hi Erica,

With advanced aggregation, the processes that are not displayed individually are collected into two additional aggregated groups:

  1. aggregated_above
  2. aggregated_below

The default frame size is 512 displayed processes. You can change this value in the Edit Configuration dialog box in the GUI.


Please see the link below for more information on aggregation.

https://software.intel.com/content/www/us/en/develop/documentation/ita-user-and-reference-guide/top/intel-trace-analyzer-reference/concepts/advanced-aggregation.html


However, since you are working with the CLI, we will get back to you on how to set the frame size in CLI mode after discussing with the concerned internal team.


Thanks & Regards

Goutham


mpiuser1
Beginner

Hi Goutham,

Thanks for the note! Yes, it would be great to know how to do that with the CLI. Is there an environment variable for the frame size?

Thanks,

Erica

GouthamK_Intel
Moderator

Hi Erica,

Sorry for the delay!

We are discussing the ITAC CLI question with the concerned internal team and will get back to you soon.

However, we have a lightweight product, Application Performance Snapshot (APS), as an alternative to ITAC for high-level analysis.

We can see that you are interested in rank-to-rank message profiling, so you may consider APS for that.

To use APS, follow the steps below:

  1. Set up the APS environment:
    1. source /opt/intel/oneapi/setvars.sh intel64
  2. Set the MPI statistics collection level: export APS_STAT_LEVEL=5
  3. Compile the MPI application:
    1. mpiicc <filename>
  4. Run the following command to collect data about your MPI application:
    1. <mpi launcher> <mpi parameters> aps <my app> [<app parameters>]
    2. e.g. mpirun -n 100 aps ./reduce
  5. A folder named aps_result_<date> will be created.
  6. Run the aps command to generate the result (an HTML file will be created):
    1. aps --report=aps_result_<date>
  7. To generate specific reports such as message profiling or collective profiling, pass flags to the aps-report command (a consolidated sketch of these steps follows this list):
    1. aps-report ./aps_result_<postfix> -f  # prints the function summary
    2. aps-report ./aps_result_<postfix> -x  # prints rank-to-rank communication information (message profiling)
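
Putting the steps above together, a minimal end-to-end sketch, assuming a oneAPI installation under /opt/intel/oneapi; the source file name (reduce.c), the rank count, and the result-folder date are placeholders, not fixed values:

  # Steps 1-2: set up the environment and the statistics collection level
  source /opt/intel/oneapi/setvars.sh intel64
  export APS_STAT_LEVEL=5

  # Step 3: compile the MPI application (reduce.c / reduce are example names)
  mpiicc reduce.c -o reduce

  # Steps 4-5: run under aps; this creates an aps_result_<date> folder
  mpirun -n 100 aps ./reduce

  # Step 6: generate the HTML summary report
  aps --report=aps_result_<date>

  # Step 7: text reports for specific metrics
  aps-report ./aps_result_<date> -f   # function summary
  aps-report ./aps_result_<date> -x   # rank-to-rank message profiling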


For more information, please refer to the link below.

https://software.intel.com/content/www/us/en/develop/documentation/application-snapshot-user-guide/top/detailed-mpi-analysis/analysis-charts/data-transfers-per-rank-to-rank-communication.html


Have a Good day!


Thanks & Regards

Goutham



GouthamK_Intel
Moderator

Hi Erica,

We have checked with the concerned internal team.

In CLI mode, there is currently no way to switch off the aggregation, so we suggest you use APS with the steps mentioned in the earlier post.
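
For your original scenario with over 1000 ranks, a minimal sketch of the APS path (binary name, rank count, and result-folder date are placeholders):

  export APS_STAT_LEVEL=5
  mpirun -n 1024 aps ./my_mpi_app
  aps-report ./aps_result_<date> -x   # per rank-pair data transfers (message profiling)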

Please let us know if you face any further challenges.


Thanks & Regards

Goutham


GouthamK_Intel
Moderator

Hi Erica,

Could you please let us know if your issue is resolved?

If yes, let us know whether we can close this thread from our side.


Regards

Goutham


mpiuser1
Beginner
GouthamK_Intel
Moderator

Hi, 

Thanks for the confirmation!

As this issue has been resolved, we will no longer respond to this thread. 

If you require any additional assistance from Intel, please start a new thread. 

Any further interaction in this thread will be considered community only. 

Have a Good day!


Thanks & Regards

Goutham

