All,
I hope the Intel MPI experts here can help me out. Intel MPI 5.0.3.048 was recently installed on our cluster, which uses a GPFS filesystem. Looking at the release notes I saw that "I_MPI_EXTRA_FILESYSTEM_LIST gpfs" is now available. Great! I thought I'd try it out and see whether I could detect an effect.
However, I'm having trouble telling whether it's on or not. I tried running a simple Hello World (no I/O, but simple) with I_MPI_DEBUG=9. I get the usual splat of information, but whether or not I pass in "-genv I_MPI_EXTRA_FILESYSTEM on -genv I_MPI_EXTRA_FILESYSTEM_LIST gpfs", I never see anything in the I_MPI_DEBUG output indicating whether it was enabled. I even tried I_MPI_DEBUG=100, but nothing.
Is there a way to know if this has been enabled? I was hoping to find an MPI-I/O benchmark that would let me see a difference, but if I can't tell whether Intel MPI is actually enabling it, I'm wary of thrashing my disks without being sure.
Thanks,
Matt
Also, as an aside, I think I found a bug in the Intel MPI Benchmarks 4.0 Update 2. Line 73 of IMB_g_info.c still has:
char* VERSION="4.0 Update 1";
That didn't get updated for the new release. :)
Hey Matt,
Thanks for pointing out the IMB Version discrepancy. We'll get that fixed :)
As far as GPFS support goes, I tried the highest debug level for Intel MPI, but it doesn't print the FILESYSTEM variables. We print all the I_MPI_INFO* settings and a few others needed for debugging purposes (like the fabric used, the pinning scheme, etc.).
When GPFS support is enabled, Intel MPI reads the I_MPI_EXTRA_FILESYSTEM_LIST environment variable and dynamically loads the libmpi_gpfs.so library. If you want to be extra sure, you could link against that library manually for your executable.
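Since the library is loaded dynamically, one way to confirm the load actually happens is glibc's loader tracing. A sketch, assuming a Linux/glibc system; `hello_mpi` is a placeholder for your own MPI executable:

```shell
# LD_DEBUG=libs makes the glibc dynamic loader report every library it
# resolves, including ones pulled in at runtime via dlopen().
I_MPI_EXTRA_FILESYSTEM=on \
I_MPI_EXTRA_FILESYSTEM_LIST=gpfs \
LD_DEBUG=libs mpirun -n 2 ./hello_mpi 2>&1 | grep libmpi_gpfs
```

If the GPFS driver is being picked up, libmpi_gpfs.so should appear in the loader trace.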
Speaking of testing the GPFS support, I see you've already discovered the Intel MPI Benchmarks. I would compile the IMB-IO tests and run some of those micro-benchmarks (e.g. P_Read_priv) with I_MPI_EXTRA_FILESYSTEM turned on and off to compare performance.
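An on/off comparison along those lines might look like the following sketch; the process count and working directory are placeholders, and the runs should target a directory on the GPFS mount:

```shell
cd /path/to/gpfs/scratch   # placeholder: a directory on the GPFS filesystem

# Baseline: GPFS-aware driver off
mpirun -n 16 ./IMB-IO P_Read_priv

# Same benchmark with the GPFS driver enabled
mpirun -n 16 \
    -genv I_MPI_EXTRA_FILESYSTEM on \
    -genv I_MPI_EXTRA_FILESYSTEM_LIST gpfs \
    ./IMB-IO P_Read_priv
```

Comparing the reported bandwidth between the two runs should show whether the GPFS driver makes a difference.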
Let me know how that sounds and what your results look like.
Regards,
~Gergana
Hey Matt,
The IMB bug will be fixed in our next release.
I also spoke with the developer of the GPFS support, and he said that I_MPI_DEBUG=10 will print the following information regarding the ADIO driver selected:
[0] ADIO_Init(): Load support for GPFS file system
[0] ADIO_ResolveFileType(): Choose GPFS file system
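So a quick way to check is to grep the debug output for those ADIO lines. A sketch; `your_mpiio_app` is a placeholder for any program that does MPI-I/O on the GPFS filesystem:

```shell
# I_MPI_DEBUG=10 is enough to surface the ADIO driver-selection messages.
I_MPI_DEBUG=10 mpirun -n 2 \
    -genv I_MPI_EXTRA_FILESYSTEM on \
    -genv I_MPI_EXTRA_FILESYSTEM_LIST gpfs \
    ./your_mpiio_app 2>&1 | grep ADIO
```

Seeing "Load support for GPFS file system" and "Choose GPFS file system" confirms the feature is active.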
I hope your tests with MPI-IO are going well. Since this is a brand new feature in Intel MPI, we'd love to get your feedback. You can either post here or email me directly.
Regards,
~Gergana