I am trying to use an MPI profiling library with executables that have been built using Intel MPI. This particular library (Darshan 2.2.0-pre1) can be preloaded with LD_PRELOAD to intercept I/O related MPI function calls. It provides a wrapper for each function of interest to collect statistics and then invokes the PMPI version of each function so that the program operates as usual.
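The wrapper-plus-PMPI pattern described above can be sketched as follows. This is a self-contained mock, not the Darshan source: `MPI_File` and the `PMPI_` entry point are stand-ins defined locally so the sketch compiles without mpi.h.

```cpp
// Stand-in so this sketch compiles without mpi.h (illustration only).
typedef int MPI_File;

// Stand-in for the MPI library's real PMPI entry point.
extern "C" int PMPI_File_close(MPI_File *fh) { (void)fh; return 0; }

static int files_closed = 0;  // statistics collected by the wrapper

// The preloaded library exports the MPI_ name. The dynamic linker resolves
// the application's call here first; the wrapper records its statistics and
// then forwards to the PMPI_ entry point so the program behaves as usual.
extern "C" int MPI_File_close(MPI_File *fh)
{
    ++files_closed;
    return PMPI_File_close(fh);
}
```

With a real MPI library, this file would be compiled into a shared object and activated via `LD_PRELOAD`, shadowing the library's own `MPI_File_close` symbol.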
Everything works great with C programs or C++ programs, whether I use the Intel compilers or GNU compilers.
Unfortunately, I am having problems with Fortran. My main concern is with programs built with the Intel Fortran compiler, using either "mpiifort" or "mpif90 -fc=ifort". The executables work fine, but when I try to use the LD_PRELOAD'ed Darshan library it fails to intercept the underlying MPI calls. In fact, I can't even find any MPI functions in the symbols for the executable using gdb or nm, though obviously MPI is working fine in my test program.
Can someone help me figure out what I am doing wrong? Is there any way to intercept the MPI calls at run time from a Fortran program built using the Intel MPI suite?
To my knowledge this approach usually works fine for Fortran programs built using MPI libraries based on MPICH or OpenMPI, although at least in the former case you normally have to preload an additional library (such as libfmpich.so) for it to work properly. I tried preloading a few of the Intel .so libraries before the Darshan library in case there was a similar issue with the Intel suite, but I did not have any luck.
Many thanks!
I'm currently looking into this issue. I have managed to replicate this on both the current release version of Darshan (2.1.2) and the pre-release version you are using. Are you able to intercept the calls if a different compiler is used with the Intel MPI Library?
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
Thanks,
-Phil
Which mpif90 are you using? The one provided with the Intel MPI Library should automatically link in all of the necessary libraries. It looks like you're not getting /home/pcarns/working/impi-4.0.3.008/intel64/lib/libmpi.so linked in, or not in the correct order. It should be linked before libmpigf.so is linked.
In my test, I am unable to intercept with gfortran as the compiler. I'll see if I can pin down exactly what's happening here.
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
Your compilation looks very similar to mine, with only a few differences in the include paths. For reference, here is what I get for the compilation command:
[bash]
[james@fxvm-jatullox01 io]$ mpif90 -show mpi_io.f90 -o test
gfortran -ldl -ldl -ldl -ldl mpi_io.f90 -o test -I/opt/intel/impi/4.0.3.008/intel64/include/gfortran/4.4.0 -I/opt/intel/impi/4.0.3.008/intel64/include -L/opt/intel/impi/4.0.3.008/intel64/lib -L/opt/intel/impi/4.0.3.008/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/impi/4.0.3.008/intel64/lib -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/4.0.3 -lmpi -lmpigf -lmpigi -lpthread -lpthread -lpthread -lpthread -lrt
[/bash]
I am able to compile with no errors or warnings.
As far as intercepting the calls, I am contacting our developers for additional information.
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
Hi Phil,
Just out of curiosity, when you are attempting to intercept the Fortran calls, are you accounting for the name differences somewhere? Looking through the Darshan code I didn't see anywhere that this was done. As an example, look at MPI_FILE_WRITE_ORDERED. In C/C++, the name of the function is MPI_File_write_ordered. In Fortran (if seen from C), it is mpi_file_write_ordered_. That could be what you are encountering. Here is a small C++ code I used to intercept a Fortran call to MPI_FILE_WRITE_ORDERED.
[cpp]#include <mpi.h>
#include <cstdio>

// Declaration of the PMPI entry point for the Fortran binding.
extern "C" void pmpi_file_write_ordered_(MPI_Fint *fh, char *buf,
                                         MPI_Fint *count, MPI_Fint *datatype,
                                         MPI_Fint *status, MPI_Fint *ierr);

// Interceptor for the Fortran binding (note the trailing underscore and
// the extra ierr argument at the end).
extern "C" void mpi_file_write_ordered_(MPI_Fint *fh, char *buf,
                                        MPI_Fint *count, MPI_Fint *datatype,
                                        MPI_Fint *status, MPI_Fint *ierr)
{
    printf("Intercepted MPI_FILE_WRITE_ORDERED\n");
    pmpi_file_write_ordered_(fh, buf, count, datatype, status, ierr);
}[/cpp]
I was able to preload the library generated by this and intercept a Fortran call to MPI_FILE_WRITE_ORDERED with no problem.
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
- Is there any shared entry point for C and Fortran bindings, or should a wrapper for the former catch MPI_File_open() while a wrapper for the latter catches mpi_file_open_()?
- Do these mpi_file_open_ etc. function names have exactly the same arguments as the C interface functions like MPI_File_open()?
- If I make a wrapper for mpi_file_open_, can it invoke an underlying pmpi_file_open_ after collecting profiling information? In other words, does it follow the convention of the MPI/PMPI function names in the C interface?
- Is the Fortran function naming convention fairly stable across MPI releases (i.e., if one were to generate wrappers using those function names, would they likely still work in future Intel MPI releases)?
Hi Phil,
Shared entry points: I will need to check with the developers on this point, but for a more general use, I would recommend a separate interceptor for each.
Function arguments: No. In almost all MPI functions the Fortran call has an additional argument at the end for the return code. Also, the Fortran MPI interface is implemented mostly as subroutines with no return value rather than functions.
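The difference shows up directly in the two prototypes. Here is a minimal sketch, with MPI_Fint mocked as a plain int so it stands alone; a real wrapper would forward to the matching PMPI symbol instead of setting ierr itself.

```cpp
typedef int MPI_Fint;  // stand-in for mpi.h's MPI_Fint (illustration only)

// C binding: an int-returning function.
//   int MPI_Barrier(MPI_Comm comm);
//
// Fortran binding as seen from C: a void "subroutine", every argument
// passed by reference, with a trailing ierror output argument.
extern "C" void mpi_barrier_(MPI_Fint *comm, MPI_Fint *ierr)
{
    (void)comm;  // unused in this mock
    // a real wrapper would forward to pmpi_barrier_(comm, ierr) here
    *ierr = 0;   // the error code travels through ierr, not a return value
}
```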
MPI/PMPI: I'll need to verify this with the developers as well. However, I believe that it will follow the same naming convention. I just modified the code I posted earlier to call PMPI_File_write_ordered instead. However, there are additional considerations I will mention shortly.
Naming convention: It is (or should be) as stable as the MPI Standard itself, at least with regard to the MPI call structure.
Now, here's something to keep in mind. I don't know how much Fortran you use, so I'm going to start at a pretty basic level; forgive me if I sound like I'm lecturing. Fortran is case insensitive, and the names used in Fortran are mangled by the compiler to account for this. However, nothing defines how a compiler should mangle names, and compilers frequently use different name-mangling methods. These can also be configured via command-line options at compile time. As a quick example, MPI_File_open in Fortran could really end up in any of these forms:
- mpi_file_open
- mpi_file_open_
- mpi_file_open__
- MPI_FILE_OPEN

Possibly others, depending on the exact compiler and options (these are the naming variants with which I'm most familiar). If you stay within one language, this distinction is mostly irrelevant. However, you are mixing languages, and thus it must be considered.
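One way to cope with this is to export every common mangling variant from the preloaded library and route them all to a single implementation. A self-contained sketch, with MPI_Fint mocked as int and a counter standing in for real profiling and PMPI calls:

```cpp
typedef int MPI_Fint;  // stand-in for mpi.h's MPI_Fint (illustration only)

static int close_count = 0;  // stands in for real profiling state

// Shared implementation; a real wrapper would record statistics and then
// invoke the matching PMPI symbol (e.g. pmpi_file_close_).
static void file_close_impl(MPI_Fint *fh, MPI_Fint *ierr)
{
    (void)fh;
    ++close_count;
    *ierr = 0;
}

// Export the common mangling variants so one preloaded library matches
// whichever scheme the Fortran compiler happened to use.
extern "C" {
void mpi_file_close  (MPI_Fint *fh, MPI_Fint *ierr) { file_close_impl(fh, ierr); }
void mpi_file_close_ (MPI_Fint *fh, MPI_Fint *ierr) { file_close_impl(fh, ierr); }
void mpi_file_close__(MPI_Fint *fh, MPI_Fint *ierr) { file_close_impl(fh, ierr); }
void MPI_FILE_CLOSE  (MPI_Fint *fh, MPI_Fint *ierr) { file_close_impl(fh, ierr); }
}
```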
Hopefully that's clear. I'll check with the developers on some of the specifics, and let you know when I've got some more information.
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
You will need to write a separate interface function for the C and Fortran calls. The Fortran and C functions will both call the same PMPI version to do the actual work. This is similar to what the Intel Trace Analyzer and Collector does for profiling.
To simplify things, you could have your Fortran interceptor call the C MPI API function (after the appropriate conversions) and do the profiling entirely within the C version. That way you only have to maintain one set of profiling functionality, with two interfaces.
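That delegation might look like the following sketch, where the types, the f2c handle converter, and the PMPI entry point are mocked stand-ins for their mpi.h counterparts so the example is self-contained:

```cpp
typedef int MPI_Fint;  // mock stand-ins for mpi.h types (illustration only)
typedef int MPI_File;

// Stand-ins for the real handle converter and PMPI entry point.
static MPI_File MPI_File_f2c(MPI_Fint f) { return (MPI_File)f; }
static int PMPI_File_sync(MPI_File fh)   { (void)fh; return 0; }

static int sync_calls = 0;  // profiling state shared by both entry points

// C interceptor: all the profiling happens here.
extern "C" int MPI_File_sync(MPI_File fh)
{
    ++sync_calls;
    return PMPI_File_sync(fh);
}

// Fortran interceptor: converts the Fortran handle, then reuses the C
// interceptor, so the profiling logic is maintained in one place.
extern "C" void mpi_file_sync_(MPI_Fint *fh, MPI_Fint *ierr)
{
    *ierr = MPI_File_sync(MPI_File_f2c(*fh));
}
```

A Fortran call to MPI_FILE_SYNC then flows through the single C profiling path, and only the thin conversion shim has to exist per binding.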
I hope this helps clarify what you'll need to do. Do you have any other questions?
Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools