
Poor NFS performance

John_F_1

This has been a topic of discussion previously [1][2], but I haven't seen any comment from anyone at Intel about it: is there anything that can be done about the poor performance of NFS on the MIC? I timed copying a 500 MB file from the host over NFS and got about 20 MB/s, which is far too slow to drive a native application's I/O. I was hoping for at least an order of magnitude faster, even though the PCI Express bus should be able to sustain at least two orders of magnitude more. Can it be done?
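
For anyone who wants to reproduce the measurement, a simple streaming read with dd is an easy way to check NFS throughput from the coprocessor. A minimal sketch, where the mount point and file name are placeholders:

  # time a 500 MB streaming read over the NFS mount (run on the MIC)
  time dd if=/mydir/bigfile of=/dev/null bs=1M count=500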

What is the recommended alternative to doing I/O natively? For example, should I be using SCIF with a small application running on the host that performs the I/O for the native application? Should I be using MPI? I was hoping that with NFS I could get away with not using any cores on the host, but it appears that might not be possible.

[1]: https://software.intel.com/en-us/forums/topic/382695

[2]: https://software.intel.com/en-us/forums/topic/404743

John_F_1

I found the following post: https://software.intel.com/en-us/articles/building-a-native-application-for-intel-xeon-phi-coprocessors which says: "A good method for handling input and output of large data sets is to mount a folder from the host file system to the coprocessor and access the data from there." The author uses the following options for mounting the NFS share:

host:/mydir /mydir nfs rsize=8192,wsize=8192,nolock,intr 0 0

I added these options and I do notice a bump in throughput (it is now around 40 MB/s), but that is still not enough to sustain the ~200 threads required on the MIC. I also notice that doubling the rsize and wsize given above is slightly better on my machine.
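
For reference, the doubled entry would look like the following (a sketch only; the optimal values are machine-dependent, and the maximum the client accepts depends on the kernel and NFS version):

host:/mydir /mydir nfs rsize=16384,wsize=16384,nolock,intr 0 0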

Wendy__C_

That read number (20 MB/s) seems too low. Is this on the MPSS 3.x software stack? If so, could you check whether tcp_sack has been turned off on *both* the Phi and the host? If it has, turn it back *on* to see whether it makes any difference.

  [root]# /sbin/sysctl net.ipv4.tcp_sack      # check its default value
  net.ipv4.tcp_sack = 0                       # it has been turned off

  [root]# /sbin/sysctl net.ipv4.tcp_sack=1    # turn it on
  net.ipv4.tcp_sack = 1
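
If that helps, the setting can be made persistent in the usual way; a sketch, assuming a standard /etc/sysctl.conf (note that MPSS regenerates the coprocessor's filesystem at boot, so on the card side the change may need to be reapplied or placed in the MPSS-managed overlay files):

  # persist across reboots (standard Linux sysctl mechanism)
  echo "net.ipv4.tcp_sack = 1" >> /etc/sysctl.conf
  /sbin/sysctl -p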
John_F_1

I am using MPSS 3.2.1. tcp_sack is already 1 on the Phi. The host is Windows; is there a setting there that I should verify? What bandwidth do you see on your cards over NFS?

John_F_1

The registry key

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\SackOpts

is not present, and from the documentation I gather that the default is on.
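
For completeness, the value can be checked and set explicitly from an elevated command prompt; a sketch (setting SackOpts to 1 just matches the documented default, and TCP parameters under this key generally require a reboot to take effect):

  reg query "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v SackOpts
  reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v SackOpts /t REG_DWORD /d 1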
