Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Valgrind Support

domw
Beginner
I have been trying to use Valgrind with Intel MPI Library 3.2 and am getting a whole slew of what I believe to be spurious warnings from deep inside the MPI libraries; see below. (It could still be me, of course.)

Is there any update to this posting from your colleague?


The Open MPI team have obviously spent some time on Valgrind support.


Have you done anything similar (or do you plan to)? Any recommendations (e.g. forcing the transport to TCP on all nodes, suppression files, etc.)?
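
For concreteness, the sort of thing I have in mind is sketched below (impi.supp and ./my_app are placeholders, and I_MPI_DEVICE=sock is, as I understand it, the Intel MPI 3.x selector for the plain TCP sockets device):

# Force the sock (TCP) device instead of rdssm, and wrap each rank in Valgrind.
# impi.supp and ./my_app are placeholders for a local suppressions file and binary.
export I_MPI_DEVICE=sock
mpiexec -n 4 valgrind --tool=memcheck --suppressions=impi.supp ./my_app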

thanks

Dominic

==8143== Syscall param writev(vector[...]) points to uninitialised byte(s)
==8143== at 0x35871BFC57: writev (in /lib64/tls/libc-2.3.4.so)
==8143== by 0x6FE33BB: MPIDU_Sock_wait (in /ixscratch/domw/Abingdon/IX/Branch/Main/rh3_x86_64_gcc-3.4.3/RelWithDebInfo/libmpi.so.3.2)
==8143== by 0x6F2D7B3: MPIDI_CH3I_RDSSM_Progress (in /ixscratch/domw/Abingdon/IX/Branch/Main/rh3_x86_64_gcc-3.4.3/RelWithDebInfo/libmpi.so.3.2)
==8143== by 0x6F2C74F: MPIDI_CH3_Progress_test (in /ixscratch/domw/Abingdon/IX/Branch/Main/rh3_x86_64_gcc-3.4.3/RelWithDebInfo/libmpi.so.3.2)
==8143== by 0x6F27CA8: MPIDI_CH3_Init (in /ixscratch/domw/Abingdon/IX/Branch/Main/rh3_x86_64_gcc-3.4.3/RelWithDebInfo/libmpi.so.3.2)
==8143== by 0x6F861AB: MPIDD_Init (in /ixscratch/domw/Abingdon/IX/Branch/Main/rh3_x86_64_gcc-3.4.3/RelWithDebInfo/libmpi.so.3.2)
==8143== by 0x7005836: MPID_Init (in /ixscratch/domw/Abingdon/IX/Branch/Main/rh3_x86_64_gcc-3.4.3/RelWithDebInfo/libmpi.so.3.2)
==8143== by 0x6F7CB7E: MPIR_Init_thread (in /ixscratch/domw/Abingdon/IX/Branch/Main/rh3_x86_64_gcc-3.4.3/RelWithDebInfo/libmpi.so.3.2)
==8143== by 0x6F792F7: PMPI_Init (in /ixscratch/domw/Abingdon/IX/Branch/Main/rh3_x86_64_gcc-3.4.3/RelWithDebInfo/libmpi.so.3.2)
==8143== by 0x4A17F5D: PMPI_Init (libmpiwrap.c:2122)

==8143== Uninitialised byte(s) found during client check request
==8143== at 0x4A12547: PMPI_Get_count (libmpiwrap.c:902)
==8143== by 0x4A12CCF: maybe_complete (libmpiwrap.c:382)
==8143== by 0x4A140B9: PMPI_Waitany (libmpiwrap.c:1426)
.......

==8143== Invalid read of size 8
==8143== at 0x700F029: __I_MPI___intel_new_memcpy (in /ixscratch/domw/Abingdon/IX/Branch/Main/rh3_x86_64_gcc-3.4.3/RelWithDebInfo/libmpi.so.3.2)
==8143== by 0x7007ADB: __I_MPI__intel_fast_memcpy.J (in /ixscratch/domw/Abingdon/IX/Branch/Main/rh3_x86_64_gcc-3.4.3/RelWithDebInfo/libmpi.so.3.2)
......
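
In case it is useful, I assume suppression entries along these lines would quiet the two patterns above (sketched by hand from the traces; running with --gen-suppressions=all would give the exact frame lists):

# Hand-written from the traces above; frame lists may need adjusting.
{
   impi-writev-uninit
   Memcheck:Param
   writev(vector[...])
   fun:writev
   fun:MPIDU_Sock_wait
}
{
   impi-fast-memcpy-overread
   Memcheck:Addr8
   fun:__I_MPI___intel_new_memcpy
}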

Gergana_S_Intel
Employee

Hi Dominic,

Thanks for posting and welcome to the HPC forums.

As Patrick mentions in his Gmane post, Valgrind support has been integrated into the MPI Correctness Checker, a tool that ships with the Intel Trace Analyzer and Collector and does runtime error checking of your MPI code. It also includes a way to do distributed memory checking of your parallel application via Valgrind, in a similar vein to the memchecker framework in Open MPI.

I would suggest giving that a try; we have 30-day evaluation licenses available for download from the website. The documentation (section 4.2.2, "Running with valgrind") describes how to run it.
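
To give you a flavor, the invocation looks something like the sketch below (check the doc section above for the exact form for your version; -check_mpi is the launcher switch that preloads the checking library, and ./your_app is a placeholder):

# Enable the MPI Correctness Checker and wrap each rank in Valgrind (a sketch).
mpiexec -check_mpi -n 4 valgrind --tool=memcheck ./your_app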

Let me know if I've misunderstood your request or if the functionality you're looking for is not included.

Looking forward to hearing back,
~Gergana
