
Intel X540 T1 performance and bottleneck

EPerr1
Beginner

Hello, I'm having trouble getting sustainable file-transfer performance from RAM disk to RAM disk.

My setup is two Intel X540-T1s directly connected with a Cat6a cable 3 feet in length. My systems are as follows:

Windows 10

Intel i7 3930 (six core)

Asus Sabertooth X79

32GB DDR3 memory

Samsung 850 pro 500GB

GTX 580

Intel X540-T1 (in second x16 slot)

Windows Server 2012 R2 Essentials

AMD Phenom X4 955 (quad core)

M5A99FX PRO

32GB DDR3 memory

SanDisk Pro 240GB

Intel X540-T1 (in first x16 slot)

I've done both iperf and ntttcp testing to verify that I'm capable of getting over 1100MB/s.
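For anyone who wants to reproduce that raw-throughput check without iperf or ntttcp, something like the Python sketch below pushes a single TCP stream between the two boxes. The port, chunk size, and duration are arbitrary placeholders, and pure Python can itself become the bottleneck well before 10Gb/s, so treat it as a sanity check rather than a benchmark.

```python
# throughput_check.py - rough single-stream TCP throughput test. iperf/ntttcp are
# the proper tools; pure Python may bottleneck well before 10Gb/s.
import socket
import sys
import time

PORT = 5201          # arbitrary port, assumed free on both machines
CHUNK = 1 << 20      # 1 MiB per send/recv
DURATION = 10        # seconds the sender transmits

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        with conn:
            total = 0
            start = time.perf_counter()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            secs = time.perf_counter() - start
            print(f"received {total / 1e6:.0f} MB in {secs:.1f}s "
                  f"= {total / 1e6 / secs:.0f} MB/s")

def client(host):
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as sock:
        end = time.perf_counter() + DURATION
        while time.perf_counter() < end:
            sock.sendall(payload)

if __name__ == "__main__":
    # usage: python throughput_check.py server
    #        python throughput_check.py client <server-ip>
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```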

My issue is actual file transfers. I enabled jumbo frames at their maximum size, matched between the two machines; I've maxed all the performance options, and I've experimented with the queue count on each card, finding that the maximum of 16 queues actually gave the best performance. I even changed the queues' core-count setting on the server, which made no difference in performance.
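One quick way to confirm the jumbo-frame setting is actually in effect end to end is to ping with a large, non-fragmentable payload (8972 bytes = a 9000-byte MTU minus 28 bytes of IP and ICMP headers). A rough Python wrapper around Windows' built-in ping, with a placeholder peer address, might look like this:

```python
# jumbo_check.py - confirm jumbo frames pass end to end on Windows by pinging
# with a large, non-fragmentable payload (8972 = 9000-byte MTU minus 28 bytes
# of IP + ICMP headers). Replace PEER with the other machine's address.
import subprocess

PEER = "192.168.10.2"   # assumed address of the other X540-T1

result = subprocess.run(
    ["ping", "-f", "-l", "8972", "-n", "4", PEER],  # -f: don't fragment, -l: payload size
    capture_output=True, text=True,
)
print(result.stdout)
if "needs to be fragmented" in result.stdout or result.returncode != 0:
    print("Jumbo frames are NOT making it through - check MTU on both adapters.")
else:
    print("Jumbo-sized packets passed without fragmentation.")
```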

I started out getting 300MB/s transfers back and forth from the server, which was a letdown for a 10Gb dream. Somewhere in adjusting queues I got 500MB/s from the Windows 10 client to the server, with a max of 300MB/s from the server to the Windows 10 client. Looking at CPU utilization, I saw that when I wrote 500MB/s to the server, server utilization was in the 90% range, with the "system" process hogging almost 80% of the CPU. On my Windows 10 client I saw less than 10% utilization. When I transferred around 300MB/s to the Windows client, I saw half the CPU utilization on the server and the same under-10% figure on the Windows 10 client.
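To see whether that "system" process is really pegging one core rather than spreading the load, a per-core view helps more than the overall utilization number. Here's a rough logging sketch to leave running while a copy is in progress; it assumes the third-party psutil package (pip install psutil):

```python
# cpu_watch.py - log per-core CPU usage once a second during a transfer, to spot
# a single core (interrupt/SMB work in the "System" process) that is maxed out
# even when overall utilization looks modest.
import psutil  # third-party package: pip install psutil

try:
    while True:
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        line = " ".join(f"{c:5.1f}" for c in per_core)
        print(f"per-core %: {line}   max: {max(per_core):5.1f}")
except KeyboardInterrupt:
    pass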

Now here's the exciting part. I went into my Windows client and changed the auto-detect setting under link speed to 10Gb, and I saw 500MB/s both to the server and to the Windows 10 client. I went to do the same on the server, but the option is greyed out, not letting me adjust anything under link speed. Also, I'm not seeing consistent results. Whenever I hit 500MB/s transfers, CPU usage on the server is in the 90% range and again less than 20% on the Windows 10 side. The thing is, I don't always get those 500MB/s speeds.

I can do back-to-back copies with about 15 seconds in between. One from the client to the server, I'll get 500MB/s; then another from the server to the client at 500MB/s. I'll wait 15-30 seconds and repeat the exact same procedure. Most times I'll get 500MB/s, with the occasional 300MB/s going to the server, but I'll consistently get 280-300MB/s going to the client. It's 50/50 whether I get 500MB/s or not.
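To take the guesswork out of comparing those runs, a timed copy of the same file gives a repeatable MB/s figure. A minimal sketch, with placeholder paths for the local RAM disk and the server share:

```python
# copy_bench.py - time a single large-file copy and report MB/s, so the
# 300 vs 500 MB/s runs can be compared with the same file every time.
# SRC/DST are placeholders - point them at the RAM disk and the share.
import os
import shutil
import time

SRC = r"R:\testfile.bin"                 # assumed RAM-disk path on this machine
DST = r"\\SERVER\ramdisk\testfile.bin"   # assumed share backed by the server's RAM disk

size = os.path.getsize(SRC)
start = time.perf_counter()
shutil.copyfile(SRC, DST)
secs = time.perf_counter() - start
print(f"copied {size / 1e6:.0f} MB in {secs:.1f}s = {size / 1e6 / secs:.0f} MB/s")
```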

On top of the inconsistency, I'm still not hitting the potential speed! In fact, I'm getting half of it.

Here are my questions:

Is anyone else experiencing a similar issue?

Do you have similar hardware?

Will more cores (say an i7 3930) be better suited for the server that is only going to act as a NAS?

Does anyone recognize any bottlenecks?

(Forgot to mention that I never ran low on RAM even with the RAM drives)

Are there any driver tweaks out there that anyone can suggest?

I'm using the most current drivers for the NICs; is there a better version out there that performs better?

Any other advice would be greatly appreciated. I'm new to networking and I'm using this in a home system for learning purposes, hence the simple setup yet complicated issues.

Thanks ahead of time!

SYeo3
Valued Contributor I

Dear EPerr1,

Thank you for contacting Intel.

Connecting two systems directly can cause a bottleneck, and this type of setup is recommended only for testing purposes. In the meantime, you may try disabling flow control and the offloading features in the adapter's properties, under advanced settings. You may also check for other applications that may cause the slowness, such as a firewall, anti-virus, etc.
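As a starting point, you may list the adapter's current advanced settings so you can compare them before and after a change. Below is a minimal sketch that calls the built-in NetAdapter PowerShell module (Windows 8 / Server 2012 and later) from Python; the adapter name is a placeholder and the exact property display names vary by driver version, so please verify them in the output first:

```python
# adapter_props.py - dump an adapter's advanced properties (flow control,
# offloads, RSS queues, etc.) so before/after states can be compared.
# "Ethernet 2" is a placeholder - substitute the X540-T1's connection name
# (check with Get-NetAdapter).
import subprocess

ADAPTER = "Ethernet 2"   # assumed adapter name

cmd = (f'Get-NetAdapterAdvancedProperty -Name "{ADAPTER}" | '
       'Format-Table DisplayName, DisplayValue -AutoSize')
out = subprocess.run(["powershell", "-NoProfile", "-Command", cmd],
                     capture_output=True, text=True)
print(out.stdout)

# To change a setting, Set-NetAdapterAdvancedProperty works the same way, e.g.
# (verify the display name in the dump above first):
#   Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Flow Control" -DisplayValue "Disabled"
```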

Hope this helps.

Sincerely,

Sandy
