My research lab recently acquired some machines for performance testing. The machines are:
Intel DH55TC motherboard (latest BIOS)
Intel Core i3-530
Intel Gigabit CT Desktop PCI-E x1 NIC (82574L)
When we run iperf tests between the NIC and another host (with the 82574L as the client, sending packets), we can never reach line rate; the maximum we seem to be able to achieve is around 700 Mbps. We have tried changing interrupt affinities, TX ring sizes, etc., on both Debian Squeeze (2.6.32-5 kernel) and CentOS 5.5 (some variant of 2.6.18). We're using e1000e version 1.2.10-NAPI.
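For reference, the test we run is essentially the following (the interface roles are from our setup; 10.0.0.2 is a placeholder for the receiving host's address, not an address from our network):

```shell
# On the receiving host:
iperf -s

# On the machine with the 82574L (client side, sending packets);
# run for 60 seconds, reporting every 5 seconds:
iperf -c 10.0.0.2 -t 60 -i 5
```

With the on-board 82578DC the same invocation reports ~940 Mbps; with the 82574L it tops out around 700 Mbps.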
The on-board NIC (82578DC) on the same motherboard can achieve line rate with ease.
Pointers are appreciated. If more information is needed please let me know. Thanks!
Edit: I should also mention that I have tried the 82574L NIC on a different machine with a different Intel board (don't know which right now) with Debian 2.6.26. It did not achieve line rate on that machine either.
I am not aware of any specific issues with this adapter that would explain what you are seeing. You could try experimenting with InterruptThrottleRate (see the README in the tar file). Maybe the issue is motherboard related; try a different PCIe slot or updating the BIOS.
Let us know what you find out.
Thanks for the information, Mark. We also suspected this was related to interrupts, so I set the InterruptThrottleRate to 1 when loading the driver. (Is there a way to check and see if this was applied correctly?) Did not seem to make a difference.
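For what it's worth, this is how we load the option (a sketch of our procedure; assumes no other interface depends on e1000e while it is unloaded):

```shell
# Unload the driver and reload it with ITR forced to 1.
# InterruptThrottleRate takes one comma-separated value per port.
rmmod e1000e
modprobe e1000e InterruptThrottleRate=1

# The driver logs its probe messages on reload:
dmesg | tail
</imports>
```

I don't see the chosen throttle rate echoed anywhere in the probe output, hence my question about verifying it.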
We've tried different PCIe slots; we're on the latest BIOS revision for the motherboard.
I also checked interrupt and CPU usage. I loaded the module with default settings (which means InterruptThrottleRate = 3) and used "watch -n 1 cat /proc/interrupts" to monitor the number of interrupts generated. I found a couple of interesting things:
- InterruptThrottleRate = 3 is supposed to cap bulk-traffic interrupts at 4000/second, according to the readme. The actual rate was around 20000.
- There are two interrupts for the NIC, one at 32, one at 33:
- The one at 32 is called "eth1-Q0", and it is the one that increments at around 20000 per second. It fires to CPU0 exclusively when the smp_affinity mask is set to 000f; if I set the mask to 0002, it does fire to CPU1 correctly.
- The one at 33 is called "eth1", and does not seem to increment at all. (Stayed constant at 2 during my iperf test.)
- CPU usage is minimal. About 4% of one CPU is spent on softirq for the NIC; all other CPUs are essentially idle.
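In case anyone wants to reproduce the measurement, this is roughly what I did (the IRQ number 32 is from our box; the awk parsing is a sketch that sums the per-CPU counter columns):

```shell
# Sample /proc/interrupts twice, one second apart, and sum the
# per-CPU counters for one IRQ line to get interrupts/second.
irq=32   # "eth1-Q0" on our machine; adjust for yours
count() {
    awk -v irq="${irq}:" \
        '$1 == irq { s = 0; for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++) s += $i; print s }' \
        /proc/interrupts
}
a=$(count); a=${a:-0}
sleep 1
b=$(count); b=${b:-0}
echo "$((b - a)) interrupts/second on IRQ $irq"

# Pinning IRQ 32 to CPU1 (mask 0002 = CPU1 only) was done with:
# echo 0002 > /proc/irq/32/smp_affinity
```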
Sorry about the delay in getting back to you. (I took a vacation.) I don't know if you are still doing any testing, but I will pass on a few notes just in case you find them useful.
I was told by one of the factory engineers that you "should be able to check the setting by using "ethtool -c ethX" and looking at rx-usecs. The interrupt throttle rate will either be a single digit or the number of microseconds (1/interrupt throttle rate)." This might help you in seeing the result of changing the ITR setting.
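To illustrate (the interface name is a placeholder, and the exact output format varies by driver version):

```shell
# Query the interrupt coalescing settings for the adapter.
# Per the note above, a single-digit rx-usecs value means one of
# the dynamic ITR modes (1 or 3); a larger value is the interval
# in microseconds between interrupts.
ethtool -c eth1 | grep 'rx-usecs:'
```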
You are using one of the x1 PCI Express slots, right? You could try the other slot, but both x1 slots and the onboard LAN go through the PCH, according to the block diagram in the board TPS: http://downloadmirror.intel.com/18505/eng/DH55TC_TechProdSpec.pdf
Are you using the latest BIOS for your desktop board (http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=19226&lang=eng)? I notice that the BIOS release notes include updates to the PCH reference code. I am not a desktop board expert, but that might affect adapter performance.