Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Missing tmpfs mount on /dev/shm in SLES 10.2

davidet
Beginner
We normally install and run the Intel MPI Library for Linux on Red Hat systems. When installing it on a SLES 10.2 system, we encountered the following warning message:

WARN: Either the device /dev/shm was not found on your system or a mount entry wasn't found for it in the /etc/fstab file. Before using Intel MPI Library, Development Kit for Linux* OS, please make sure this device is present.

Sure enough, no mount point for /dev/shm is defined in /etc/fstab (Red Hat defines one like this: tmpfs /dev/shm tmpfs defaults 0 0). The simplest test on the SLES 10.2 system seems to run (I'm only a sysadmin, so my knowledge of MPI programming is very limited), but this does not give me a warm feeling; a real MPI program might notice the difference.
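For reference, this is how I checked on the SLES node; neither command returns anything, while on our Red Hat systems both show the tmpfs entry:

# look for a tmpfs mount for /dev/shm, both configured and currently active
grep shm /etc/fstab
mount | grep /dev/shm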

My question is: has anyone installed the Intel MPI Library on SLES 10, and if so, what (if anything) has to be done to define the /dev/shm mount point?

Thanks in advance
Gergana_S_Intel
Employee
Hi David,

The Intel MPI Library requires the presence of the shared memory device so it can allocate the shared memory segment when you're using devices such as shm, ssm, and rdssm (rdssm is the default). If instead you're using the sock or rdma devices, /dev/shm is not required on the system.
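As a quick sanity check while the mount is missing, you can force the sockets device, which doesn't touch /dev/shm at all. A minimal sketch, assuming an Intel MPI release where the device is selected through the I_MPI_DEVICE environment variable, and with a placeholder program name:

# run over TCP sockets only, bypassing the shared memory path
export I_MPI_DEVICE=sock
mpirun -n 4 ./your_mpi_app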

Is /dev/shm present in your system in the first place (regardless of the fstab entry)? For example, if you do 'ls /dev/ | grep shm', does it return anything?

If yes, then this is how we define the shm mount on our systems:
none /dev/shm tmpfs defaults 0 0
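Once that line is in /etc/fstab, a rough sketch of activating and verifying it without a reboot would be:

# mount the new fstab entry and confirm a tmpfs is now on /dev/shm
mount /dev/shm
mount | grep /dev/shm    # should show something like: none on /dev/shm type tmpfs (rw)
df -h /dev/shm           # tmpfs size defaults to roughly half of the machine's RAM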
Then you can try uninstalling and reinstalling Intel MPI to verify whether the warning is still there.

Let me know how it goes. I can also provide you with a short set of directions on how to run a simple "Hello World" MPI program across the cluster.
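For reference, once the Intel MPI environment scripts are sourced, a run typically looks something like the sketch below (the source file and rank count are placeholders, and older MPD-based releases may also need the MPD ring started first):

# compile a hello-world source with the Intel MPI compiler wrapper and launch 4 ranks
mpiicc hello.c -o hello    # or mpicc if you build with gcc underneath
mpirun -n 4 ./hello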

Regards,
~Gergana
davidet
Beginner

Thanks for the quick response, Gergana!

/dev/shm does exist on the SLES 10.2 system; however, it is a plain directory that already contains files. Here's what it looks like:

ls -la /dev/shm
total 0
drwxrwxrwt 3 root root 60 2009-03-13 11:07 .
drwxr-xr-x 9 root root 6980 2009-03-13 11:07 ..
drwxr-xr-x 3 root root 300 2009-03-13 11:07 sysconfig

ls -la /dev/shm/sysconfig/
total 36
drwxr-xr-x 3 root root 300 2009-03-13 11:07 .
drwxrwxrwt 3 root root 60 2009-03-13 11:07 ..
-rw-r--r-- 1 root root 25 2009-03-16 10:03 config-eth0
-rw-r--r-- 1 root root 3 2009-03-16 10:03 config-lo
-rw-r--r-- 1 root root 61 2009-03-13 11:07 if-eth0
-rw-r--r-- 1 root root 37 2009-03-13 11:07 if-lo
-rw-r--r-- 1 root root 7 2009-03-13 11:07 ifup-eth0
-rw-r--r-- 1 root root 7 2009-03-13 11:07 ifup-lo
-rw-r--r-- 1 root root 3 2009-03-13 11:07 network
-rw-r--r-- 1 root root 8 2009-03-13 11:07 new-stamp-2
-rw-r--r-- 1 root root 8 2009-03-13 11:07 new-stamp-3
-rw-r--r-- 1 root root 0 2009-03-13 11:07 ready-eth0
-rw-r--r-- 1 root root 0 2009-03-13 11:07 ready-lo
-rw-r--r-- 1 root root 0 2009-03-13 11:07 ready-sit0
drwxr-xr-x 2 root root 60 2009-03-13 11:07 tmp

I had thought of simply mounting a tmpfs filesystem on /dev/shm (as your post indicates), but I was not sure what the existing contents of /dev/shm are used for, or how mounting on top of them might affect the system.
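Before touching anything, I was planning to check whether any process still has those files open, roughly like this (assuming fuser and lsof are available on the node):

# see whether anything is actively using the stray files
fuser -v /dev/shm/sysconfig/*
lsof +D /dev/shm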
Gergana_S_Intel
Employee
Hi David,

Ok, there was a little delay in my reply this time, but I had a chat with our local cluster deployment experts.

Generally, there should be nothing in /dev/shm. It's an implementation of the shared memory concept, used to pass data between programs (in our case, Intel MPI would be using /dev/shm to pass data between MPI processes). Essentially, it's memory-backed virtual storage rather than a directory on disk.

If /dev/shm is indeed populated, go ahead and remove any contents and mount the tmpfs filesystem on top.
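A rough sketch of that cleanup, assuming nothing is actively using the files and with a throwaway backup location of your choosing:

# keep a copy of the stray files just in case, then clear the directory and mount tmpfs on top
cp -a /dev/shm /root/dev-shm-backup
rm -rf /dev/shm/*
mount -t tmpfs none /dev/shm    # or simply 'mount /dev/shm' once the fstab entry is in place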

The guys I spoke with said that this is a known issue with the Moab cluster provisioning system, where it copies some sysconfig files into /dev/shm (which should be empty). Is that what you're using? If so, I would suggest getting in touch with Cluster Resources so they're aware of the problem and can include a fix in their scripts.

I hope this helps. Let me know how it goes.

Regards,
~Gergana