
Intel NIC 82599 EB enable SR-IOV and multiqueue

idata
Employee

 

Hi,

I have an 82599EB 10-Gigabit NIC. I want to enable SR-IOV and multiqueue on this NIC. My OS is Fedora 18.

If I load the ixgbe driver with "modprobe ixgbe", then I can see that multiqueue is enabled:

Multiqueue Enabled: Rx Queue count=32, Tx Queue count=32

But if I enable SR-IOV by loading the driver with "modprobe ixgbe max_vfs=2", then multiqueue is disabled:

Multiqueue disabled: Rx Queue count=1, Tx Queue count=1

So how can I enable both SR-IOV and multiqueue? This is really important to me.
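
(For reference, this is how the queue status can be checked after each driver load; eth0 is just a placeholder for the 82599 port name.)

# Reload the PF driver with SR-IOV enabled and check the queue counts reported at probe time
rmmod ixgbe
modprobe ixgbe max_vfs=2
dmesg | grep -i "queue count"

# Depending on the driver version, the channel (queue) counts may also be visible through ethtool
ethtool -l eth0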

Patrick_K_Intel1
Employee

I pestered my favorite expert on this topic and he came back with a great answer:

Intel 10Gb CNA drivers are now available from http://sourceforge.net/projects/e1000/files/.

 

Please download the following drivers:

  1. Physical function (PF) driver - "ixgbe" version 3.18.7, located in the "ixgbe stable" folder.
  2. Virtual function (VF) driver - "ixgbevf" version 2.11.3, located in the "ixgbevf stable" folder.

This driver set supports the VF multiqueue and RSS features. The PF driver supports 4 TX/RX queue pairs and the VF supports 2 TX/RX queue pairs.

You will need the following command line to compile the driver. Untar the driver and go into the ixgbe-3.18.7/src folder:

 

"make CFLAGS_EXTRA="-DIXGBE_ENABLE_VF_MQ" install"

This command will compile and install the driver.
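
As a rough sketch, the whole sequence looks like this (assuming the tarball was downloaded to the current directory; adjust max_vfs to the number of VFs you need):

# Unpack and build the PF driver with the VF multiqueue flag enabled
tar zxf ixgbe-3.18.7.tar.gz
cd ixgbe-3.18.7/src
make CFLAGS_EXTRA="-DIXGBE_ENABLE_VF_MQ" install

# Reload the PF driver with SR-IOV enabled
rmmod ixgbe
modprobe ixgbe max_vfs=2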

This driver will only work with older Linux kernels like the ones included in RHEL 5.x distributions.

 

"ethtool –S" will not show multiple queues being utilized with these drivers .

 

This is an ethtool and driver interaction issue. You will need to examine /proc/interrupts to ensure the PF in the host OS is utilizing 4 TX/RX queue pairs and the VF in the guest OS is utilizing 2 TX/RX queue pairs.
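
For example (ethX and eth0 below are placeholders for the host PF and guest VF interface names; the exact vector names can vary by driver version):

# On the host: the PF should show 4 TxRx vectors
grep -c "ethX-TxRx" /proc/interrupts

# In the guest: the VF should show 2 TxRx vectors
grep -c "eth0-TxRx" /proc/interrupts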

Hope that does what you are needing!

- Patrick

idata
Employee

Thanks for your reply. Now we need the VF to support at least 8 queues. Is that possible?

Patrick_K_Intel1
Employee

Sorry, only 2 queue pairs per VF are available.

idata
Employee

Looking at the PF driver (ixgbe_sriov.c), the PF always returns 1 as the max Tx/Rx queue number, regardless of the VMQ/RSS parameters. Does this mean the VF will always have at most 1 Tx/Rx pair available? If so, is there any plan to fix it?

Brian_J_Intel1
Employee

Here are the steps to configure multiple rx queues for VFs on ixgbe for VMs with 2 or more vCPUs.

Use multiqueue=1 when bringing up the ixgbe driver, AND have the PF port linked and up before configuring the number of VFs. This will enable 2 Rx and 2 Tx queues per VF.

If you want more Rx queues, then you have to enable DCB with PFC and 8 traffic classes using dcbtool, but realistically this does not provide much advantage, so the 2-queue configuration is recommended.
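
If you do go that route, a rough sketch with dcbtool from the lldpad package looks like this (the interface name and arguments are only an illustration; see dcbtool(8) for the exact priority/traffic-class mapping options):

# Start the DCB agent and enable DCB/PFC on the PF port
service lldpad start
dcbtool sc ens2f0 dcb on
dcbtool sc ens2f0 pfc e:1 a:1 w:1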

Here are the steps…

rmmod ixgbe

modprobe ixgbe multiqueue=1

# Make sure that the port is up before adding vfs

ip link set up ens2f0

echo 8 > /sys/class/net/ens2f0/device/sriov_numvfs

dmesg

...

 

[440989.505373] ixgbe 0000:02:00.0 ens2f0: NIC Link is Up 10 Gbps, Flow Control: RX/TX

[441197.069094] ixgbe 0000:02:00.0: SR-IOV enabled with 8 VFs

[441197.069102] ixgbe 0000:02:00.0: configure port vlans to keep your VFs secure

[441197.069336] ixgbe 0000:02:00.0: removed PHC on ens2f0

[441197.093965] ixgbe 0000:02:00.1 ens2f1: NIC Link is Down

[441197.171232] ixgbe 0000:02:00.0: registered PHC device on ens2f0

[441197.339674] ixgbe 0000:02:00.0 ens2f0: detected SFP+: 3

[441197.379140] pci 0000:02:10.0: [8086:10ed] type 00 class 0x020000

[441197.379534] ixgbevf 0000:02:10.0: enabling device (0000 -> 0002)

[441197.443177] ixgbe 0000:02:00.0 ens2f0: VF Reset msg received from vf 0

[441197.443469] ixgbe 0000:02:00.0: VF 0 has no MAC address assigned, you may have to assign one manually

[441197.459610] ixgbevf 0000:02:10.0: MAC address not assigned by administrator.

[441197.459616] ixgbevf 0000:02:10.0: Assigning random MAC address

[441197.460335] ixgbevf 0000:02:10.0: Multiqueue Enabled: Rx Queue count = 2, Tx Queue count = 2

[441197.460915] ixgbevf: eth0: ixgbevf_probe: Intel(R) 82599 Virtual Function

[441197.460918] 9e:cc:ab:57:53:ab

[441197.460930] ixgbevf: eth0: ixgbevf_probe: GRO is enabled

[441197.460933] ixgbevf: eth0: ixgbevf_probe: Intel(R) 10GbE PCI Express Virtual Function Driver

[441197.460976] pci 0000:02:10.2: [8086:10ed] type 00 class 0x020000

[441197.461214] ixgbevf 0000:02:10.2: enabling device (0000 -> 0002)

[441197.463433] ixgbevf 0000:02:10.0 enp2s16: renamed from eth0

[441197.523143] ixgbe 0000:02:00.0 ens2f0: VF Reset msg received from vf 1

[441197.523421] ixgbe 0000:02:00.0: VF 1 has no MAC address assigned, you may have to assign one manually

[441197.528792] ixgbe 0000:02:00.0 ens2f0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
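
The "no MAC address assigned" messages above mean the VFs came up with random MACs; a fixed MAC can optionally be assigned from the PF side (the address below is only an example):

# Assign a MAC address to VF 0 on the PF port
ip link set ens2f0 vf 0 mac 02:11:22:33:44:55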

# Use ethtool -S to see 2 queues

root@nd-2312-1:/usr/src/ixgbe-5.2.4/src# ethtool -S enp2s16

NIC statistics:

rx_packets: 4

tx_packets: 0

rx_bytes: 240

tx_bytes: 0

tx_busy: 0

tx_restart_queue: 0

tx_timeout_count: 0

multicast: 4

rx_csum_offload_errors: 0

tx_queue_0_packets: 0

tx_queue_0_bytes: 0

tx_queue_0_bp_napi_yield: 0

tx_queue_0_bp_misses: 0

tx_queue_0_bp_cleaned: 0

tx_queue_1_packets: 0

tx_queue_1_bytes: 0

tx_queue_1_bp_napi_yield: 0

tx_queue_1_bp_misses: 0

tx_queue_1_bp_cleaned: 0

rx_queue_0_packets: 4

rx_queue_0_bytes: 240

rx_queue_0_bp_poll_yield: 0

rx_queue_0_bp_misses: 0

<span style="fon...
