Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

ITAC: no trace file

AAK
Novice

Hi,

I am trying to get ITAC running, but there is no trace / .stf file. I am compiling with the flags

mpiifort -g -O0 -fpp -trace -tcollect

and running MPI with

LD_PRELOAD=libVT.so mpirun -trace -np 8 path-to-exe

The LD_PRELOAD suggestion comes from this post, but it does not make any difference.

I source itacvars.sh and mpivars.sh and set the VT_CONFIG variable to my usual configuration file.
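
For completeness, the full sequence looks roughly like this (the paths and the program name are placeholders for my setup):

source /path/to/itac/bin/itacvars.sh            # placeholder path to the ITAC environment script
source /path/to/impi/intel64/bin/mpivars.sh     # placeholder path to the Intel MPI environment script
export VT_CONFIG=/path/to/vt_config.conf        # placeholder path to my usual VT config file
mpiifort -g -O0 -fpp -trace -tcollect -o myprog myprog.f90    # "myprog" is a placeholder name
LD_PRELOAD=libVT.so mpirun -trace -np 8 ./myprog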

Versions are

mpiifort --version 
ifort (IFORT) 19.0.4.243 20190416
Copyright (C) 1985-2019 Intel Corporation.  All rights reserved.

mpirun --version 
Intel(R) MPI Library for Linux* OS, Version 2019 Update 4 Build 20190430 (id: cbdd16069)
Copyright 2003-2019, Intel Corporation.

The program executes, MPI_INIT and MPI_FINALIZE are called, and the program terminates without errors.


Where should I start debugging this?

AAK
Novice

Now I have a clue about what is going on:

For some reason ITAC does not seem to support the use of mpi_f08 in the code. When I change to pmpi_f08, ITAC runs normally. But using pmpi_f08 is not an option, since it produces several runtime errors (without any error message, even though the code is compiled with debug flags).

This seems related to this topic.

Any suggestions on how to get ITAC running together with mpi_f08?

PrasanthD_intel
Moderator

Hi Alexander,

 

ITAC supports the Fortran 2008 standard, as mentioned in the release notes (https://software.intel.com/content/www/us/en/develop/articles/intel-trace-analyzer-and-collector-release-notes-linux.html), but regarding the support for mpi_f08 we will get back to you.

Also, were you able to generate an .stf file while using "use mpi_f08", or are you getting a segmentation fault while using the -trace option?

Can you provide us with any reproducer code?

 

Thanks

Prasanth

AAK
Novice

A simple test case is the following program:

program test
  use mpi_f08
  integer ierr, siz, rank
  call MPI_INIT(ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD,siz,ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierr)
  write(*,*) "hello world"
  call MPI_FINALIZE(ierr)
end program test

Compiling and running this with 

 

mpiifort -g -trace -o test test.f90
mpirun -n 2 -trace ./test

 

does not generate a .stf file. It seems that ITAC doesn't do anything. There is no ITAC output to the command line (usually I'd expect something like "[0] Intel(R) Trace Collector INFO: Writing tracefile ... .stf in ...").

However, when changing mpi_f08 to pmpi_f08 in the file test.f90 above and running the exact same commands, ITAC does what it is supposed to do: there is a .stf file and output on the command line.
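
For reference, the swap can be applied and re-tested like this (just a convenience sketch for the test.f90 above):

sed -i 's/use mpi_f08/use pmpi_f08/' test.f90    # switch the reproducer to the pmpi_f08 module
mpiifort -g -trace -o test test.f90
mpirun -n 2 -trace ./test                        # with pmpi_f08 a .stf trace file is written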

My much more complicated code with several MPI calls terminates badly when using pmpi_f08 instead of mpi_f08. However, I am not willing to reduce this larger code to a minimal test case, since that would be a lot of work.

 

So to answer the questions:

Also, were you able to generate an .stf file while using "use mpi_f08"?

No, I wasn't. There is no sign of ITAC running at all.

Are you getting a segmentation fault while using the -trace option?

When I compile my larger code (which uses mpi_f08) either normally (without -trace) or with -g -trace, and use -trace on the mpirun command, there is no segfault, but also no .stf file.

When I change my larger code to use pmpi_f08 and compile with -g -trace, there is a segfault after some runtime.

PrasanthD_intel
Moderator

Hi Alexander,


It looks like ITAC is being loaded incorrectly.

We have tested your sample code on our side and launched traceanalyzer without any errors.

Which version of ITAC are you using? You can get it from the installation path reported by:

which traceanalyzer

Are you using the oneAPI toolkit or the Parallel Studio XE Cluster Edition?

Can you upgrade to the latest version (2021.9) and check?
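
As a quick sanity check (paths and values are installation-specific), you could also verify which ITAC installation and collector settings are being picked up:

which traceanalyzer       # which ITAC installation comes first in PATH
echo $VT_ROOT             # set by itacvars.sh; should point to the same installation
echo $VT_CONFIG           # the collector config file in use, if any
ldd ./test | grep -i vt   # whether the collector library was linked dynamically (may be empty if it was linked statically)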


Regards

Prasanth


AAK
Novice

Hi Prasanth,

traceanalyzer is located at

/opt.net/intel/parallel_studio_xe_2019_update4_cluster_edition/itac/2019.4.036/intel64/bin/traceanalyzer

Unfortunately I cannot upgrade to the latest version.

As I already said, ITAC runs perfectly fine when using mpi instead of mpi_f08. So if there is an issue in the initialization, it is not a trivial one.

PrasanthD_intel
Moderator

Hi Alexander,

 

You are saying that no .stf file is generated only when you use mpi_f08.

This looks like an issue. We are transferring this thread to the internal team for better support.

Could you please provide your environment details (OS version, cpuinfo output)? This will be helpful for our team.

 

Regards

Prasanth

AAK
Novice

Hey Prasanth,

Thanks for the reply. Here is the output:

> cpuinfo 
Intel(R) processor family information utility, Version 2019 Update 4 Build 20190430 (id: cbdd16069)
Copyright (C) 2005-2019 Intel Corporation.  All rights reserved.

=====  Processor composition  =====
Processor name    : Intel(R) Core(TM) i7-3770  
Packages(sockets) : 1
Cores             : 4
Processors(CPUs)  : 8
Cores per package : 4
Threads per core  : 2

=====  Processor identification  =====
Processor       Thread Id.      Core Id.        Package Id.
0               0               0               0   
1               0               1               0   
2               0               2               0   
3               0               3               0   
4               1               0               0   
5               1               1               0   
6               1               2               0   
7               1               3               0   
=====  Placement on packages  =====
Package Id.     Core Id.        Processors
0               0,1,2,3         (0,4)(1,5)(2,6)(3,7)

=====  Cache sharing  =====
Cache   Size            Processors
L1      32  KB          (0,4)(1,5)(2,6)(3,7)
L2      256 KB          (0,4)(1,5)(2,6)(3,7)
L3      8   MB          (0,1,2,3,4,5,6,7)

> uname -a
Linux btpcx21 4.12.14-lp151.28.59-default #1 SMP Wed Aug 5 10:58:34 UTC 2020 (337e42e) x86_64 x86_64 x86_64 GNU/Linux

 

Do you need any further info?

Best Regards
Alexander

Klaus-Dieter_O_Intel

This was a known issue which was fixed in 2019 Update 5.

 

You wrote that you cannot upgrade. I assume that is because you do not have root access? That would not prevent upgrading, because you can do a local installation. A non-root installation goes into $HOME/intel/ by default. Please let me know if you need additional support.
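
As an illustration only (package and directory names depend on the exact release you download), a non-root installation from the offline package looks roughly like this:

tar xzf <cluster_edition_package>.tgz    # use the actual package file name here
cd <cluster_edition_package>
./install.sh                             # run as a normal user; installs into $HOME/intel/ by default
# afterwards, source mpivars.sh and itacvars.sh from the new $HOME/intel/ tree instead of the old installation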

 

Please see https://software.intel.com/content/www/us/en/develop/articles/intel-parallel-studio-xe-supported-and-unsupported-product-versions.html for the latest supported versions, which contain the most bug fixes.

 

 

AAK
Novice

Hello Klaus-Dieter,

I tried the newer versions of Intel Parallel Studio XE Cluster Edition, 2019 Update 5 and 2020 Update 2, but the above example still does not produce a trace file.

When I compile and run the toy program mentioned above with `use mpi_f08`, I get

> /opt.net/intel/parallel_studio_xe_2020_update2_cluster_edition/compilers_and_libraries_2020.2.254/linux/mpi/intel64/bin/mpiifort -g -trace -o test test.f90
> /opt.net/intel/parallel_studio_xe_2020_update2_cluster_edition/compilers_and_libraries_2020.2.254/linux/mpi/intel64/bin/mpirun -n 2 -trace ./test
 hello world
 hello world

on the command line, but no trace files. When I run with `use pmpi_f08`, I get the command-line output

> /opt.net/intel/parallel_studio_xe_2020_update2_cluster_edition/compilers_and_libraries_2020.2.254/linux/mpi/intel64/bin/mpiifort -g -trace -o test test.f90
> /opt.net/intel/parallel_studio_xe_2020_update2_cluster_edition/compilers_and_libraries_2020.2.254/linux/mpi/intel64/bin/mpirun -n 2 -trace ./test
 hello world
 hello world
[0] Intel(R) Trace Collector INFO: Writing tracefile test.itac.stf in ...
[0] Intel(R) Trace Collector INFO: Writing tracefile test.itac.stf in ...

and perfectly readable trace files, but this is still not an option, since pmpi_f08 fails when used in my actual code.

So it looks like the bug still exists, at least in my particular setup.

Best regards,
Alexander

 

 
