This has been a topic of discussion previously [1][2], but I haven't seen any comment from anyone at Intel about it: is there anything that can be done about the poor NFS performance on the MIC? I timed copying a 500 MB file from the host over NFS and got about 20 MB/s, which is far too slow to drive a native application's I/O. I was hoping for at least an order of magnitude more, even though the PCI Express bus should be able to sustain at least two orders of magnitude more. Can it be done?
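In case it helps, a simple way to reproduce the measurement (file names here are hypothetical; on the coprocessor you would point `if=` at a file on the NFS mount) is:

```shell
# Write a 64 MB test file, then time a sequential read back.
# dd itself reports the transfer rate when it finishes.
# Note: reading a freshly written local file measures the page cache;
# reading over the NFS mount measures the actual wire throughput.
dd if=/dev/zero of=/tmp/nfs_probe bs=1M count=64 2>/dev/null
dd if=/tmp/nfs_probe of=/dev/null bs=1M
rm -f /tmp/nfs_probe
```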
What is the recommended alternative for doing I/O natively? For example, should I be using SCIF with a small application running on the host that performs the I/O on behalf of the native application? Should I be using MPI? I was hoping that with NFS I could avoid using any cores on the host, but it appears that may not be possible.
I found the following post: https://software.intel.com/en-us/articles/building-a-native-application-for-intel-xeon-phi-coprocessors, which says: "A good method for handling input and output of large data sets is to mount a folder from the host file system to the coprocessor and access the data from there." The author uses the following options for mounting the NFS share:
host:/mydir /mydir nfs rsize=8192,wsize=8192,nolock,intr 0 0
I added these options and do notice a bump in throughput (it is now around 40 MB/s), but that is still not enough to sustain the ~200 threads running on the MIC. I also notice that doubling the rsize and wsize given above works slightly better on my machine.
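For reference, the doubled values correspond to an fstab entry like this (same hypothetical share and mount point as in the article):

```
host:/mydir /mydir nfs rsize=16384,wsize=16384,nolock,intr 0 0
```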
That read number (20 MB/s) seems too low. Is this on the MPSS 3.x software stack? If so, could you check whether tcp_sack has been turned off on *both* the Phi and the host? If it has, turn it *on* and see whether that makes any difference.
```
[root]# /sbin/sysctl net.ipv4.tcp_sack    # check its current value
net.ipv4.tcp_sack = 0                     # it has been turned off
[root]# /sbin/sysctl net.ipv4.tcp_sack=1  # turn it on
net.ipv4.tcp_sack = 1
```
I am using MPSS 3.2.1, and tcp_sack is already 1 on the Phi. The host is Windows; is there a setting there that I should verify? What bandwidth do you see on your cards over NFS?
The registry key
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\SackOpts
is not present, and from the documentation I gather that the default is on.
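If it turns out SACK does need to be forced on, a .reg fragment for that key would look like the following (a sketch only; per Microsoft's TCP/IP registry documentation, 1 means SACK enabled, and a reboot is needed for the change to take effect):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"SackOpts"=dword:00000001
```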