I have an Intel 82599 10Gb NIC which supports SR-IOV. The PF driver is v4.0.3, compiled with the CFLAGS_EXTRA="-DIXGBE_ENABLE_VF_MQ" option. The VF driver is v2.16.1 (on both the host and the guest machines). The OS is SLES12 with the 3.12.43 kernel. SR-IOV is fully set up, VFs are assigned to VMs, and everything is working fine.
However, I've noticed that there is a maximum of 4 TxRx queue pairs on the host machine and only 2 TxRx queue pairs in the VM guests, no matter which options are used when loading the ixgbe driver. To me, this is a limiting factor when scaling VMs: assigning additional CPU cores does little if the applications do a lot of network I/O and all of it is handled by 2 cores.
Is this a hard limitation of the ixgbe/ixgbevf drivers? Can the number of queues be increased?
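In case it helps, here is how I've been checking the queue counts. This is a minimal sketch only: the interface name eth1 and whether the driver honors ethtool -L are assumptions, and ixgbevf may simply reject the increase.

```shell
# Show the channel (queue) layout the VF driver negotiated;
# "Combined" is the number of TxRx queue pairs.
ethtool -l eth1

# Attempt to raise the number of combined TxRx queue pairs.
# If the driver caps VFs at 2 pairs, this fails with an error
# rather than silently succeeding.
ethtool -L eth1 combined 4

# Cross-check: each active queue pair normally gets its own
# MSI-X vector, visible here.
grep eth1 /proc/interrupts
```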
We are running an 82599 10 Gigabit Dual Port NIC with the 4.1.2 Linux driver. Is it still the case that the driver only supports two queue pairs per VF? The driver documentation seems to say the VFs will use pools with up to 8 queue pairs; is this not correct? If it is correct, are there plans to support more than 2 queue pairs per VF?
Intel® 82599 SR-IOV Driver Companion Guide: http://www.intel.com/content/www/us/en/embedded/products/networking/82599-sr-iov-driver-companion-guide.html