Greetings, Intel Community Members.
I have been testing a direct connection between two ESXi 6.5 hosts, both with Intel 10 Gbit FA DA cards in them.
I have tried both PCI passthrough and the normal setup, and no matter what I do — no matter how many driver or Windows settings I change, you name it — I am unable to get over 1.56 Gbps between the two.
Before you suggest a disk bottleneck: both hosts' disks do 900 MB/s read and 600 MB/s write, so it is not that. I am at a loss; I have tried other cables and so on, and I am just stumped.
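To put that cap in perspective, a quick back-of-the-envelope conversion (a minimal sketch using decimal units) shows 1.56 Gbps is only about 195 MB/s, well below either disk figure:

```shell
# 1.56 Gbit/s expressed in MB/s: far below the 600 MB/s write speed,
# so the disks cannot be the limiting factor.
echo $((1560000000 / 8 / 1000000))   # prints 195
```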
I am using iperf for the tests between the two, but like I said, it literally will not budge above 1.56 Gbps. The cards are both in PCIe 3.0 x8 slots, and I have also gone into both BIOSes and disabled the "Memory Mapped I/O above 4GB" setting.
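For what it's worth, ~1.56 Gbps is suspiciously close to what a link that has negotiated down to a single PCIe lane can deliver, so it may be worth confirming the negotiated link width, not just the physical slot size. A sketch of the check from a Linux shell (the bus address `03:00.0` is a hypothetical placeholder; ESXi's own `lspci` output is formatted differently):

```shell
# Compare the card's maximum (LnkCap) against the currently negotiated
# (LnkSta) PCIe speed and width; 03:00.0 is a hypothetical bus address.
lspci -vv -s 03:00.0 | grep -E 'LnkCap:|LnkSta:'
# A healthy link for a 10 GbE card should report "Width x8" in LnkSta;
# "Width x1" would roughly match the observed throughput ceiling.
```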
Any clues or ideas would be appreciated.
Can you help clarify the information below:
1) What is the Windows operating system installed on the ESXi 6.5 host?
2) What driver version is used, and where did you download the driver?
3) Please double-check that the two NICs have the same settings.
Ahh, I replied to the email; I guess that doesn't update the thread.
Here are the answers:
The two NICs have the same settings. The driver is the ESXi driver ixgbe-4.5.2-2494585-offline_bundle-6073106.zip, downloaded from vmware.com, but I have also tried the 4.5.1 and 3.x drivers.
I am not passing the NIC through to the Windows OS; ESXi presents it to Windows via a virtual switch. I have tried passthrough in the past, but it had the same problem. In that instance I used the drivers that come with Windows, because I could find no newer ones online.
The OS is Windows Server 2012 R2, though I have tried 2012 and Windows 10 as well.
I have two of the NICs, one in that ESXi server and one in a different ESXi server, and I direct-connect them. Both hosts run the exact same ESXi 6.5 version, and both cards are in PCIe 3.0 x8 slots.
Please help check the following:
1) Please confirm whether you have tried configuring jumbo frames at 9 KB.
2) Set the ITR to low, as described at https://www.intel.com/content/www/us/en/support/network-and-i-o/ethernet-products/000005811.html
3) Please confirm that the ESXi version used is the stable release version. Have you tried contacting VMware support?
4) Please share the command you use to test throughput with iperf.
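On the ESXi side, the ITR and MTU knobs from points 1) and 2) can be set from the host shell. A sketch, assuming the vmklinux ixgbe driver and a standard vSwitch named `vSwitch1` (both are assumptions; the native ixgben module uses different parameter names):

```shell
# List the ixgbe module's current parameters to confirm the knob exists
esxcli system module parameters list -m ixgbe

# Disable interrupt throttling (ITR) for the first two ixgbe ports
# (comma-separated, one value per port); takes effect after a reboot
esxcli system module parameters set -m ixgbe -p "InterruptThrottleRate=0,0"

# Raise the MTU on the standard vSwitch to enable jumbo frames
esxcli network vswitch standard set -v vSwitch1 -m 9000
```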
There is no "ITR" option in the VMXNET3 NIC driver.
There is Interrupt Moderation, which I have disabled in past testing; I have tried nearly every combination of the available options on the advanced page.
For iperf I do the following.
On one VM on a separate host, on an SSD, I launch the iperf server side with `iperf3.exe -s -B IP_ADDR`.
Then, on the client machine (again on a separate host with an SSD), I run `iperf-3.1.3-win64\iperf3.exe -B x.x.x.x -c z.z.z.z -P 16 -t 30 -w 1M`.
For the -w flag I have tried everything from 32K to 10M.
I have always done all testing with jumbo frames, and all of the virtual switches have an MTU of 9000 as well.
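One check worth running with an MTU-9000 setup like this: confirm that jumbo frames actually pass end-to-end with fragmentation disabled, since a single 1500-byte hop silently caps throughput. A sketch, where `x.x.x.x` and `z.z.z.z` stand for the remote addresses (8972 = 9000 minus the 28 bytes of IP and ICMP headers):

```shell
# From the ESXi host shell: -d sets "don't fragment", -s is payload size
vmkping -d -s 8972 x.x.x.x

# From inside the Windows guest: -f sets "don't fragment", -l is size
ping -f -l 8972 z.z.z.z
```

If either command fails while a plain ping succeeds, some link in the path is not actually carrying jumbo frames.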
Thank you for the reply and the additional information provided; I will have to check on this. In the meantime, please confirm that you are using the stable release version of ESXi 6.5 and whether you have tried contacting VMware support regarding the issue. Feel free to update me. Thank you.
It is the current stable release, and we have not contacted VMware because our license requires us to pay a per-incident fee. I have just decided to order three new 10 Gbit NICs — Solarflare cards, which do have Intel chipsets, though different ones. They are on the compatibility list, as were your NICs, so I hope they work as advertised.
You can consider this thread closed.