
ITAC Lustre integration

dkokron
Beginner

I am using itac-9.0.3.051 on a Linux cluster to profile rank 0 and rank 15 of a small MPI application. The code was compiled with the Intel compiler suite (2015.3.187) and impi-5.0.3.048. The compile flags include "-g -tcollect" and I launched the job with "mpirun -trace". The application took a very long time to complete, but it did finish. Now ITAC is writing the trace data (~2328778 MB, roughly 2.3 TB) to our Lustre file system. The problem is that I'm only getting about 50 MB/s to this file system, which is capable of much higher write speeds. Does ITAC have any internal awareness of Lustre the way Intel MPI does? What ITAC settings would you suggest to achieve optimal writes to Lustre?
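For reference, the build and launch look roughly like this (source files, binary name, and rank count are placeholders; the real job script is longer):

mpiifort -g -tcollect -o app.exe *.f90     # or mpiicc for C sources; -tcollect links the ITAC collector
mpirun -trace -np 16 ./app.exe             # launch with ITAC tracing enabled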

Here are my ITAC settings

setenv VT_CONFIG ${PBS_O_WORKDIR}/itac.conf              # ITAC configuration file
setenv VT_LOGFILE_PREFIX ${PBS_O_WORKDIR}/ITAC.${jobid}  # directory for the trace files
setenv VT_FLUSH_PREFIX $VT_LOGFILE_PREFIX                # directory for intermediate flush files
setenv VT_MEM_BLOCKSIZE 2097152                          # 2 MiB internal trace-memory blocks
setenv VT_MEM_FLUSHBLOCKS 32                             # number of filled blocks that triggers a flush
setenv VT_MEM_MAXBLOCKS 512                              # max blocks held in memory before flushing
mkdir $VT_LOGFILE_PREFIX
lfs setstripe -c 4 -s 2097152 -i -1 $VT_LOGFILE_PREFIX   # 4 OSTs, 2 MiB stripes, default starting OST
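The resulting layout on that directory can be checked with:

lfs getstripe $VT_LOGFILE_PREFIX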

Dan

Dmitry_K_Intel2
Employee

Hi Dan,

ITAC is not aware of the file system; it uses regular buffered file I/O (FILE *) to store the traces.
It seems to me, though, that the issue is related to the way the trace is saved: only rank 0 writes the trace files, and that single writer can be the bottleneck.
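To check whether the single writer is really the limit, you could compare against a one-stream write into the same directory; the file name and size below are only examples:

dd if=/dev/zero of=$VT_LOGFILE_PREFIX/dd_test bs=2M count=4096 conv=fsync   # ~8 GB buffered write, fsync'd at the end

If dd reports the same ~50 MB/s, the limit is per-stream bandwidth to Lustre rather than anything specific to ITAC.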

Regards!
---Dmitry

dkokron
Beginner

As both Intel MPI and Lustre are Intel products, please consider collaborating with those teams to add parallel I/O and Lustre awareness to ITAC.
