Hello,
We were using an MPICH2-like MPI library, and ITAC ran well with it. This library has now switched to an OpenMPI-like implementation, and ITAC no longer works.
Would it be possible to provide an ITAC build for OpenMPI?
(For the MKL library there is a BLACS variant for each MPI implementation, so I assume something similar could be done for ITAC.)
Thank you,
Hi,
Officially, OpenMPI is not supported (guess why). But as far as I know, ITAC should work with OpenMPI as well.
Could you provide the compilation command for your application, along with the ITAC and OpenMPI versions? What do you mean by "switched"? Have you recompiled your application?
Regards!
Dmitry
I'm sorry for this late answer. Here is an example:
[cpp]#include <stdio.h>
#include "mpi.h"
#include "VT.h"

int main(int argc, char *argv[])
{
    int vt_handle;
    MPI_Init(&argc, &argv);
    VT_funcdef("Init", VT_NOCLASS, &vt_handle);
    VT_begin(vt_handle);
    printf("Hello World!\n");
    VT_end(vt_handle);
    MPI_Finalize();
    return 0;
}
[/cpp]
Here is my environment:
[bash]$ module list
Currently Loaded Modulefiles:
1) intel/11.1.056(default) 2) openmpi/1.4.1 3) itac/7.2.2.006
[/bash]
And here is how I compile:
[bash]$ mpicc -I$VT_ROOT/include -o prog.exe prog.c -L$VT_LIB_DIR -lVT $VT_ADD_LIBS
[/bash]
Here is the result of the execution:
[bash]$ echo $VT_MPI
impi3
$ mpirun -np 2 ./prog.exe
[localhost:26547] *** Process received signal ***
[localhost:26547] Signal: Segmentation fault (11)
[localhost:26547] Signal code: Address not mapped (1)
[localhost:26547] Failing at address: 0x44000098
[localhost:26548] *** Process received signal ***
[localhost:26548] Signal: Segmentation fault (11)
[localhost:26548] Signal code: Address not mapped (1)
[localhost:26548] Failing at address: 0x44000098
[localhost:26547] [ 0] /lib64/libpthread.so.0 [0x3113e0e4c0]
[localhost:26547] [ 1] /applications/openmpi-1.4.1/lib/libmpi.so.0(MPI_Comm_dup+0xe7) [0x2b7d88af46ab]
[localhost:26547] [ 2] ./prog.exe(VT_IPCInit+0x17f) [0x42be03]
[localhost:26547] [ 3] ./prog.exe(VT_Init+0x351) [0x4a537d]
[localhost:26547] [ 4] ./prog.exe(MPI_Init+0xd6) [0x4371d6]
[localhost:26547] [ 5] ./prog.exe(main+0x41) [0x4194e1]
[localhost:26547] [ 6] /lib64/libc.so.6(__libc_start_main+0xf4) [0x311321d974]
[localhost:26547] [ 7] ./prog.exe(realloc+0x189) [0x4193e9]
[localhost:26547] *** End of error message ***
[localhost:26548] [ 0] /lib64/libpthread.so.0 [0x3113e0e4c0]
[localhost:26548] [ 1] /applications/openmpi-1.4.1/lib/libmpi.so.0(MPI_Comm_dup+0xe7) [0x2b2ebfce46ab]
[localhost:26548] [ 2] ./prog.exe(VT_IPCInit+0x17f) [0x42be03]
[localhost:26548] [ 3] ./prog.exe(VT_Init+0x351) [0x4a537d]
[localhost:26548] [ 4] ./prog.exe(MPI_Init+0xd6) [0x4371d6]
[localhost:26548] [ 5] ./prog.exe(main+0x41) [0x4194e1]
[localhost:26548] [ 6] /lib64/libc.so.6(__libc_start_main+0xf4) [0x311321d974]
[localhost:26548] [ 7] ./prog.exe(realloc+0x189) [0x4193e9]
[localhost:26548] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 26547 on node localhost exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------[/bash]
I suppose there are some incompatibilities between the symbols in ITAC and OpenMPI. I understand why you don't want to support OpenMPI, but since MKL ScaLAPACK supports OpenMPI, I hope you will do the same with ITAC (and some other products).
Thank you for your answer.
Best Regards,
>I understand why you don't want to support OpenMPI, but since MKL ScaLAPACK supports OpenMPI, I hope you will do the same with ITAC (and some other products).
Unfortunately, any new activity, feature, or supported configuration requires human resources. OpenMPI support is not a high-priority task for us at the moment, but of course it is on the list.
Regards!
Dmitry
Hi Laurent,
As Dmitry mentioned, we don't have any current plans to support OpenMPI. But we do support MPICH, so if OpenMPI offers an MPICH-compatibility mode, you should be able to use ITAC over that.
Additionally, you can check whether the binary interfaces of ITAC and OpenMPI are compatible by recompiling the provided examples/constants.c file. Instructions and the acceptable values are given in section 1.2, "System Requirements and Supported Features", of the Intel Trace Collector Reference Manual. That way you can at least verify whether ITAC can work with OpenMPI in the first place.
Regards,
~Gergana