We have two Cisco C240 servers running Windows Server 2008 R2 Datacenter with X540 cards connected directly with a Cat 6A cable, and they only run at 0.1 Gbps (1%) with iperf. What am I doing wrong?
The Intel site lists the driver as PROWinx64.exe, version 18.4 (Latest), date 07/19/2013, but Cisco installed Intel driver 22.214.171.12401 (7/11/2014), and it seems to work apart from the performance.
Just want to clarify your setup: are you connecting one X540 to another X540 directly via cable, without passing through a switch? With this kind of setup the performance yields are usually limited; maximum performance will only be attained when using many clients.
If not, can you also provide the iperf parameters?
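For reference, this is the sort of invocation that would be useful to see. The addresses below are hypothetical, and the flags are standard iperf2 options (explicit socket buffer with -w, parallel streams with -P):

```shell
# On one server, start the iperf listener with a larger socket buffer:
iperf -s -w 512k

# On the other server, run 4 parallel streams for 30 seconds
# (192.168.10.2 is a placeholder for the listener's address):
iperf -c 192.168.10.2 -w 512k -P 4 -t 30
```

A single-stream run with defaults and a multi-stream run like this often give very different numbers, which helps separate per-connection limits from link-level ones.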
We have very little server background, so all of this is new to us. My specialty is cabling, and we are trying to understand the capabilities and limitations of various cabling and 10G PHYs. We have two Cisco C240 M3s; they have 1G motherboard PHYs, which we use to connect to our corporate network, but they also came with Broadcom dual-port 10GBASE-T PHYs, which we started testing with. We started with Ubuntu and Red Hat, and these easily reached about 9 Gbps with iperf. The test runs between the two servers over Cat 6A cabling between the two 10G PHYs.
Now we have purchased two Intel X540-T2s, swapped out the Broadcom PHY cards, and also loaded Windows Server 2008 R2. I was doing essentially the same thing, but so far nothing brings the speed over 0.5 Gbps. We start with the iperf defaults and then go through various buffer-size, packet-size, and other adjustments looking for any improvement. It appears that either some setting on the PHY or some Windows setting is limiting the throughput. We will swap the Broadcom PHYs back in to see whether Windows shows similar behavior, but I thought it best to consult Intel or Cisco first to see if we need to configure something on either the card or the Windows OS.
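One cheap thing to rule out before blaming the card: a single TCP stream tops out at window/RTT, so the socket buffer iperf uses matters. A rough sketch of the arithmetic, where the 0.1 ms RTT is just an assumption for a direct back-to-back link:

```shell
# Bandwidth-delay product: bytes that must be in flight to fill the pipe.
# window_bytes = (rate in bits/s) / 8 * (RTT in seconds)
rate_bps=10000000000     # 10 Gbit/s link
rtt_us=100               # assumed ~0.1 ms RTT on a direct Cat 6A link
echo $(( rate_bps / 8 * rtt_us / 1000000 ))   # -> 125000 bytes (~122 KB)
```

At that RTT, a 64 KB window already allows roughly 5 Gbit/s for one stream, so window size alone probably does not explain 0.5 Gbit/s, but it is quick to rule out with iperf's -w flag.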
I don't understand why it would restrict the throughput available to a single client. The previous tests did not show this, and it seems counter to having 10G connectivity. If there is some reason for it, that would be useful to understand. The X540 driver was left at its default settings, although we noticed that Jumbo Packets were disabled. Enabling them did not seem to change the results. Also, after loading the Intel driver, I see it is the same version as the one Cisco installed.
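If you retry jumbo frames, the MTU has to match on both adapters, and it is worth verifying that large frames actually pass end to end rather than trusting the driver checkbox. One way to check from a Server 2008 R2 command prompt (the address is a placeholder):

```shell
:: Show the MTU actually in effect on each interface
netsh interface ipv4 show subinterfaces

:: Ping with an 8972-byte payload (9000 minus 28 bytes of IP/ICMP header)
:: and Don't Fragment set, to confirm jumbo frames survive the path
ping -f -l 8972 192.168.10.2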
Some internet discussion pointed to Symantec, but results are essentially the same with it disabled. Other discussion is about PCIe lanes, but I could not find anything showing how to query or configure this; in any case, the X540s are installed in the same slots the original Broadcom PHYs were in.
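For what it's worth, an x8 slot should not be the bottleneck. Assuming PCIe Gen2 (which the X540 uses), the per-lane arithmetic gives ample headroom for both 10G ports:

```shell
# PCIe Gen2: 5 GT/s per lane, 8b/10b encoding -> 4 Gbit/s usable per lane
lanes=8
gbps_per_lane=4
echo "$(( lanes * gbps_per_lane )) Gbit/s"   # 32 Gbit/s for an x8 slot
```

So even with both ports of an X540-T2 saturated (20 Gbit/s), a Gen2 x8 link has headroom; a link negotiated down to x1 or Gen1 would be a different story.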
Some searching found this, but my Windows experience only goes so far:
Page 12 refers to this:
Thank you for the assistance.
I found an Intel blog showing how to look at the PCIe lanes, and there are 8.
I tried:
netsh int tcp set global chimney=enabled
It still shows the same result, and the offload state is still reported as INHOST.
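For the record, these are the commands in question, as run from an elevated Server 2008 R2 command prompt:

```shell
:: Show the current global TCP parameters (chimney, autotuning, RSS, ...)
netsh int tcp show global

:: Enable TCP Chimney offload (the command that was run)
netsh int tcp set global chimney=enabled

:: Show per-connection offload state; InHost means the Windows stack,
:: not the NIC, is handling the connection
netstat -t
```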
Is there a way to capture a snapshot of the system settings, so that the problem might be obvious to an Intel expert?