In chasing down bottlenecks in our native application for Xeon Phi, I ran a test to understand NFS and network performance.
Surprisingly, NFS shows throughput to the host of only 13-16 MB/s: memory usage on the card climbs quickly as the buffers fill, and then the file write slows down.
A measurement of TCP and UDP throughput with netcat showed only 20 MB/s. Running a netcat source and sink on the same Xeon Phi (i.e. local transfers only) showed 27-28 MB/s.
For comparison, BusSpeedDownload_pragma and BusSpeedReadback_pragma show transfer rates of at least 100 MB/s in the slowest case (1 KB transfers), going up to 6 GB/s.
Any suggestions on what settings I need to change to improve NFS performance?
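For reference, a loopback throughput probe like the netcat test described above can be sketched in Python; this is a minimal illustration (buffer sizes and names are my own, not from the original test) that pushes data through the local TCP stack and reports the rate:

```python
import socket
import threading
import time

# Minimal loopback TCP throughput probe, analogous to running a netcat
# source and sink on the same machine. Sizes are illustrative.
PAYLOAD = b"\0" * 65536           # one 64 KiB send buffer
TOTAL = 64 * 1024 * 1024          # push 64 MiB in total

def sink(listener, done):
    """Accept one connection and discard everything received."""
    conn, _ = listener.accept()
    received = 0
    while True:
        chunk = conn.recv(1 << 20)
        if not chunk:
            break
        received += len(chunk)
    conn.close()
    done.append(received)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # any free port on loopback
listener.listen(1)
done = []
t = threading.Thread(target=sink, args=(listener, done))
t.start()

src = socket.create_connection(listener.getsockname())
start = time.monotonic()
sent = 0
while sent < TOTAL:
    src.sendall(PAYLOAD)
    sent += len(PAYLOAD)
src.close()                       # EOF lets the sink finish
t.join()
elapsed = time.monotonic() - start

print("%.0f MB/s" % (sent / elapsed / 1e6))
```

On a healthy Linux host this loopback path typically sustains hundreds of MB/s or more, so local figures in the 20-30 MB/s range point at the card's network stack rather than at the measurement tools.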
Thank you !
Vladimir Dergachev
1 Reply
To check that the copy utilities are not the problem, I ran dd from /dev/zero to /dev/null, which shows ~1 GB/s throughput:
[vdergachev@phi1 vdergachev]$ time dd if=/dev/zero of=/dev/null bs=10000000 count=1000
1000+0 records in
1000+0 records out
10000000000 bytes (9.3GB) copied, 9.889527 seconds, 964.3MB/s
This shows that there is something really inefficient in the networking code.
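Not part of the original reply, but the usual first knobs to try for slow NFS over a virtual link are larger per-request transfer sizes on the client and async writes on the server. The host name, export path, and mount point below are placeholders:

```shell
# Hypothetical client-side mount; host:/export and /mnt/nfs are placeholders.
# rsize/wsize enlarge the per-request NFS transfer size; nolock avoids
# lockd round-trips; tcp is usually more robust than udp on lossy links.
mount -t nfs -o rsize=1048576,wsize=1048576,nolock,tcp host:/export /mnt/nfs

# Server side, in /etc/exports: "async" lets the server acknowledge writes
# before they reach disk, which can help when small writes dominate:
# /export  mic0(rw,async,no_root_squash)
```

Whether these help depends on where the bottleneck actually is; if raw netcat throughput is also capped at ~20 MB/s, the limit is below NFS in the network stack.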
