I am facing a scalability issue when using SR-IOV with multiple 82599ES NICs. I have two X520 network adapters containing a total of four 82599ES controllers. SR-IOV is enabled on each of them via "modprobe ixgbe max_vfs=8". The server is a Dell PowerEdge M620 running 4 KVM-hosted VMs, and each VM is assigned a VF from a separate 82599 port.
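For reference, the VFs were created and verified roughly as follows (the PF interface name is an example from my setup and will differ on other systems):

```shell
# Load the PF driver with 8 VFs per 82599ES port (4 ports -> 32 VFs total)
modprobe ixgbe max_vfs=8

# Confirm the VFs appeared on the PCI bus
lspci | grep "82599.*Virtual Function"

# Inspect VF state from the PF side (eth2 is one PF port; name varies)
ip link show eth2
```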
The problem is that the combined RX+TX rate for 64-byte UDP packets does not scale beyond ~23 MPPS.
If only 1 VM is started, it can RX+TX at ~23 MPPS (roughly 11.5 MPPS in each direction). However, when a second VM is booted, each VM drops to 5.5 MPPS RX and 5.5 MPPS TX (still ~23 MPPS combined RX+TX across both VMs). See the table below:
No. of VMs:                  1      2      4
Each VM's RX rate (MPPS): 11.5    5.5    2.8
Each VM's TX rate (MPPS): 11.5    5.5    2.8
So the total is ~23 MPPS in each case. Any idea where the bottleneck might be?
I have tried the following optimizations/settings, but none of them helped:
1) Guest memory is backed by (mapped onto) hugepages on the host, allocated from the same NUMA socket the VM runs on; sibling (hyper-threaded) cores are also used.
2) vCPUs are pinned to physical CPUs.
3) The MSI-X interrupts of each VF are mapped to the cores of the corresponding VM.
4) SR-IOV (global), Intel VT-d, and I/OAT are enabled in the BIOS.
5) Updating the host IXGBE driver to the latest version (3.22.3) brought no improvement.
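For concreteness, the hugepage, pinning, and interrupt-affinity steps above look roughly like this on my setup (the domain name, core numbers, and IRQ number are illustrative; actual IRQ numbers come from /proc/interrupts):

```shell
# 1) Back guest memory with hugepages on the host
mkdir -p /dev/hugepages
mount -t hugetlbfs hugetlbfs /dev/hugepages
echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# 2) Pin each vCPU to a physical core on the NIC's NUMA node
#    (vm1 and cores 4/5 are examples)
virsh vcpupin vm1 0 4
virsh vcpupin vm1 1 5

# 3) Route the VF's MSI-X interrupts to the same cores;
#    smp_affinity takes a hex bitmap of allowed CPUs
core=4
mask=$(printf "%x" $((1 << core)))
echo "$mask" > /proc/irq/95/smp_affinity   # IRQ 95 is an example
```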
Host OS: RHEL 6.5
Guest OS: Ubuntu 13.10
IXGBE driver version: 3.15.1-k
I am running DPDK 1.6.0 in the KVM-hosted guests.
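Traffic in each guest is driven with DPDK's testpmd; a minimal invocation looks roughly like this (core mask and queue counts are illustrative, and the VF is assumed to be already bound to a DPDK-compatible UIO driver):

```shell
# Inside the guest: testpmd on 2 lcores (mask 0x3), 4 memory channels,
# one RX and one TX queue on the VF, interactive mode
./testpmd -c 0x3 -n 4 -- -i --rxq=1 --txq=1

# At the testpmd prompt:
#   set fwd txonly     # generate 64-byte packets
#   start
#   show port stats all
```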
Any help will be greatly appreciated.