Two servers (S5520HC motherboard, Xeon E5630 CPU, 8 GB RAM, Intel 10 Gigabit AT Server Adapter) are connected directly over their 10GbE interfaces with a Cat 6 cable. Windows Server 2008 R2 with the latest updates and the latest driver versions is installed on both servers, and both BIOSes are updated to the latest version, 55.
I performed bandwidth tests with iperf, and the maximum value I could get was 6 gigabits per second.
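For reference, a test along these lines can be run with iperf 2.x roughly as follows (the IP address is a placeholder for your setup; the exact window size and stream count are illustrative, not the poster's settings):

```shell
# On the receiving server: listen with a larger TCP window.
iperf -s -w 256K

# On the sending server: 4 parallel streams, 256 KB window, 30-second run.
# A single TCP stream is often bound by one CPU core well below 10 Gb/s,
# so comparing -P 1 against -P 4 or -P 8 shows whether one core is the ceiling.
iperf -c 192.168.1.2 -P 4 -w 256K -t 30
```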
The NICs are OK: if I move them to machines with Core i5 processors and Gigabyte motherboards, running the same Windows version and the same NIC drivers, I get 9.5 gigabits per second in the same iperf tests.
With the Intel servers described above, I tried different BIOS settings and driver parameters, BIOS versions 54 and 55, and several driver versions. I also tried installing SLES instead of Windows 2008 R2, as well as Windows 2003 R2 SP2 with the latest drivers and updates, but again the maximum value was 6 Gb/s.
I also performed the following test: I set up an iSCSI target on one of the servers, created a RAM disk, connected it to the second server via the 10GbE interface, and ran iometer tests with different settings (8-16 workers, 1-64 outstanding I/Os, 64K-256K sequential read and write tests). Again the maximum performance was 750 MB/s (i.e. 6 Gb/s), and network utilization in Task Manager was 60%.
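A quick unit check shows that the iometer ceiling and the iperf ceiling are the same number, which points at one common bottleneck rather than two separate problems:

```shell
# 750 MB/s expressed in megabits per second, and as a share of a 10 Gb/s link.
echo "$((750 * 8)) Mbit/s"                     # 6000 Mbit/s = 6 Gb/s
echo "$((750 * 8 * 100 / 10000))% of 10 Gb/s"  # 60%, matching Task Manager
```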
The NICs are in the tested hardware list, but the configuration does not work properly…
I do not know if this is the case, but one possibility is that energy-saving states are being enabled on the server where the tests show slower throughput. You could check the BIOS settings to see if there are options to disable P-states and L-states, and whether that has a positive effect on throughput. I am not familiar with the server, so I can't guide you to the exact settings that might be available.
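Alongside the BIOS options, the operating system's power plan can also keep power-saving states engaged. As one hedged example, on Windows Server 2008 R2 you can switch to the built-in High performance plan from an elevated command prompt (the GUID below is the well-known identifier of that plan):

```shell
:: List the available power plans and their GUIDs.
powercfg /list

:: Activate the built-in "High performance" plan, which keeps the CPU
:: from dropping into deeper power-saving states between interrupts.
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
```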
I cannot explain the big performance difference between the two systems, but you might be able to find an adapter setting that makes a difference. By the way, tweaking settings while running test programs might not give you the best performance with your real-world application, so you may want to go through the tuning again when you run real loads on the server.
Have you experimented with the adapter's advanced settings? For example, have you tried changing the Interrupt Throttle Rate from adaptive to medium (or higher or lower)? You could also experiment with increasing or decreasing the number of RSS queues and with increasing the transmit and receive buffers. Another thing to try is turning off flow control and the various offload options, to see if one of them is causing trouble on that system.
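In addition to the per-adapter properties in Device Manager, the global TCP stack settings on Windows Server 2008 R2 can be inspected and toggled with netsh; these commands are only a sketch of the kind of checks involved, not a prescription for this system:

```shell
:: Show the current global TCP parameters (RSS, chimney offload,
:: receive window auto-tuning, etc.).
netsh int tcp show global

:: Make sure receive-side scaling is on so interrupts spread across cores.
netsh int tcp set global rss=enabled

:: TCP chimney offload has been known to interact badly with some
:: driver versions; disabling it is a common troubleshooting step.
netsh int tcp set global chimney=disabled
```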
The article "Tuning Intel® Ethernet Adapter throughput performance" is another place to look. I think I already covered the adapter configuration options above, but you might find some other ideas on that page or in one of the linked pages at the end.
I hope this helps. I am looking forward to hearing about your results.
Those were the first steps in my explorations...
Yes, I tried changing the Interrupt Moderation Rate to Low, Minimal, and Off; I tried switching jumbo frames and flow control on and off; I tried increasing and decreasing the RSS queues; and I increased the buffers to 2048, as suggested in the article above.
The only suggestion I cannot check is "For Intel® PRO/10 GbE Network Adapters confirm If your BIOS has an MMRBC (Maximum Memory Read Byte Count) adjustment, change it from its default (usually 512) to 4096 (maximum)", as I don't see that option in my BIOS.
These adapters work perfectly on two systems with Gigabyte P55-UD3L motherboards and Intel Core i5-650 CPUs connected directly with a Cat 6 cable. I do not believe that the configuration with Xeon CPUs and the S5520HC is unable to show the same results...