Intel® MPI Library

Error when compiling with -tcollect

Kevin_McGrattan

I am trying to trace a large Fortran code that uses MPI. In the past, I have added -tcollect to the list of compiler options (mpiifort). This now fails at link time:

/opt/intel/oneapi/itac/2021.7.1/lib/libdwarf.a(libdwarf_la-dwarf_init_finish.o): In function `do_decompress_zlib':
/localdisk3/tomasrod/dwarf/libdwarf-20190110/libdwarf/dwarf_init_finish.c:1644: undefined reference to `uncompress'

Can anyone tell me what the current procedure is for collecting and tracing? I have read the various "Getting Started" guides, but the information there seems stale. What is the most up-to-date description of tracing a Fortran code? I am still using the classic compiler, version 2021.7.1.

5 Replies
Barbara_P_Intel
Moderator

I'm moving this question to the HPC Toolkit Forum. That's the best place for MPI questions.

 

ShivaniK_Intel
Moderator (accepted solution)

Hi,


Thanks for posting in the Intel forums.


As a workaround, you can add "-lz" to the compiler options so that the linker can resolve zlib's uncompress symbol, which the bundled libdwarf references. If you face any further issues, please let us know.
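For example, a minimal sketch of the link step (the source and output file names are placeholders for illustration; the point is that -lz appears on the link line):

# placeholder file names; -lz lets the linker resolve zlib's uncompress
mpiifort -tcollect main.f90 -o app -lz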


If this resolves your issue, please accept it as a solution.


Thanks & Regards

Shivani



Kevin_McGrattan

The -lz option did not help. Let me tell you what I am doing:

 

I am using ifort version 2021.7.1

 

I am compiling with these options

 

-tcollect -lz -m64 -fc=ifort -O2 -ipo -no-wrap-margin -DUSE_IFPORT
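Put together, the full invocation looks roughly like this (the source and executable names are placeholders for illustration):

# placeholder file names; the flags are exactly those listed above
mpiifort -tcollect -lz -m64 -fc=ifort -O2 -ipo -no-wrap-margin -DUSE_IFPORT main.f90 -o fds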

 

I am setting these environment variables

 

export OMP_NUM_THREADS=1
export I_MPI_DEBUG=5
export LD_PRELOAD=/opt/intel/oneapi/itac/latest/slib/libVT.so
export VT_LOGFILE_FORMAT=stfsingle
export VT_PCTRACE=5
export VT_CONFIG=<path to conf file>/fds_trace.conf
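The job step is then launched with srun, roughly as follows (the executable name is a placeholder; the task count and input file match the log below):

# 8 MPI ranks, matching ranks 0-7 in the MPI startup log
srun -n 8 ./fds strong_scaling_test_008.fds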

 

Standard error gives me

 

srun: error: blaze025: tasks 1,3,5-6: Segmentation fault
srun: Terminating job step 5073590.0
srun: error: blaze025: task 2: Segmentation fault (core dumped)
slurmstepd: error: *** STEP 5073590.0 ON blaze025 CANCELLED AT 2023-01-30T09:10:49 ***
srun: error: blaze025: task 7: Segmentation fault (core dumped)
srun: error: blaze025: task 4: Segmentation fault (core dumped)
srun: error: blaze025: task 0: Segmentation fault (core dumped)

 

The log file is

 

Mon Jan 30 09:10:23 EST 2023
Input file: strong_scaling_test_008.fds
Directory: /home/mcgratta/firemodels/fds_central/Validation/MPI_Scaling_Tests/Test
Host: blaze025
MPIR_pmi_virtualization(): MPI startup(): PMI calls are forwarded to /usr/lib64/libpmi.so
MPIR_pmi_virtualization(): MPI startup(): PMI calls are forwarded to /usr/lib64/libpmi.so
MPIR_pmi_virtualization(): MPI startup(): PMI calls are forwarded to /usr/lib64/libpmi.so
MPIR_pmi_virtualization(): MPI startup(): PMI calls are forwarded to /usr/lib64/libpmi.so
MPIR_pmi_virtualization(): MPI startup(): PMI calls are forwarded to /usr/lib64/libpmi.so
MPIR_pmi_virtualization(): MPI startup(): PMI calls are forwarded to /usr/lib64/libpmi.so
[0] MPI startup(): Intel(R) MPI Library, Version 2021.7 Build 20221022 (id: f7b29a2495)
MPIR_pmi_virtualization(): MPI startup(): PMI calls are forwarded to /usr/lib64/libpmi.so
[0] MPI startup(): Copyright (C) 2003-2022 Intel Corporation. All rights reserved.
[0] MPI startup(): library kind: release
MPIR_pmi_virtualization(): MPI startup(): PMI calls are forwarded to /usr/lib64/libpmi.so
[0] MPI startup(): libfabric version: 1.13.2rc1-impi
[0] MPI startup(): libfabric provider: psm3
[0] MPI startup(): File "/opt/intel/oneapi/mpi/2021.7.1/etc/tuning_skx_shm-ofi_psm3_20.dat" not found
[0] MPI startup(): Load tuning file: "/opt/intel/oneapi/mpi/2021.7.1/etc/tuning_skx_shm-ofi_psm3.dat"
[0] MPI startup(): Rank Pid Node name Pin cpu
[0] MPI startup(): 0 9909 blaze025 0,1,2,3,4,5,6,7
[0] MPI startup(): 1 9910 blaze025 0,1,2,3,4,5,6,7
[0] MPI startup(): 2 9911 blaze025 0,1,2,3,4,5,6,7
[0] MPI startup(): 3 9912 blaze025 0,1,2,3,4,5,6,7
[0] MPI startup(): 4 9913 blaze025 0,1,2,3,4,5,6,7
[0] MPI startup(): 5 9914 blaze025 0,1,2,3,4,5,6,7
[0] MPI startup(): 6 9915 blaze025 0,1,2,3,4,5,6,7
[0] MPI startup(): 7 9916 blaze025 0,1,2,3,4,5,6,7
[0] MPI startup(): I_MPI_ROOT=/opt/intel/oneapi/mpi/2021.7.1
[0] MPI startup(): I_MPI_DEBUG=5
[0] MPI startup(): I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so

 

 

Kevin_McGrattan

Never mind. I just learned that the CPU I am using on our Linux cluster is over 10 years old and no longer supports VTune. Thanks.
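For reference, a quick way to check how old a node's processor is (run on the compute node itself):

# print the CPU model string, which can be looked up for its launch year
lscpu | grep 'Model name'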

ShivaniK_Intel
Moderator

Hi,


Glad to know that your issue is resolved. If you need any additional information, please post a new question, as this thread will no longer be monitored by Intel.


Thanks & Regards

Shivani

