
My ixgbe driver seems to ignore RSS setting and creates 24 RX and TX queues

idata (Employee)

I have an Intel 10G PCIe optical network card that I use with the ixgbe driver (version 3.0.12-NAPI) on Linux. The firmware version reported by ethtool is 0.9-3. I'm running RHEL 6 with kernel 2.6.32, and here's my ixgbe.conf:

options ixgbe MQ=1,1
options ixgbe RxBufferMode=0,0
options ixgbe InterruptThrottleRate=1,1
options ixgbe LLIPush=1,1
options ixgbe LLISize=386,386
options ixgbe RSS=10,10
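
For anyone reproducing this: options in a modprobe config file only take effect after the driver is reloaded. A minimal sketch, assuming nothing else on the box depends on ixgbe:

modprobe -r ixgbe && modprobe ixgbe
modinfo ixgbe | grep -i parm    # list the parameters this driver build actually accepts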

The documentation (http://downloadmirror.intel.com/14687/eng/README.txt) states that RSS can be set to 0-16, and that the number of RX/TX queues created will be the lesser of 16 and the number of available CPU cores. On my 24-core server, however, 24 queues are created, even though the config above says there should be only 10.
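
A quick way to count what the driver actually created (eth2 is the interface name from my dmesg below; the second command assumes this driver exports the usual per-queue rx_queue_N_packets counters):

grep eth2 /proc/interrupts | wc -l                   # one line per MSI-X vector
ethtool -S eth2 | grep -c 'rx_queue_[0-9]*_packets'  # one match per RX queue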

Any ideas why this might be happening? Here's dmesg output:

$ dmesg | grep ixgbe

ixgbe 0000:15:00.0: PCI INT A -> GSI 24 (level, low) -> IRQ 24

ixgbe 0000:15:00.0: setting latency timer to 64

ixgbe: Multiple Queue Support Enabled

ixgbe: Receive-Side Scaling (RSS) set to 10

ixgbe: 0000:15:00.0: ixgbe_check_options: dynamic interrupt throttling enabled

ixgbe: Low Latency Interrupt on Packet Size set to 386

ixgbe: Low Latency Interrupt on TCP Push flag Enabled

ixgbe: Rx buffer mode set to 0

ixgbe: 0000:15:00.0: ixgbe_check_options: Flow Director hash filtering enabled

ixgbe: 0000:15:00.0: ixgbe_check_options: Flow Director allocated 64kB of packet buffer

ixgbe: 0000:15:00.0: ixgbe_check_options: ATR Tx Packet sample rate set to default of 20

ixgbe: 0000:15:00.0: ixgbe_check_options: FCoE Offload feature enabled

ixgbe 0000:15:00.0: irq 59 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 60 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 61 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 62 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 63 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 64 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 65 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 66 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 67 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 68 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 69 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 70 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 71 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 72 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 73 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 74 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 75 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 76 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 77 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 78 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 79 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 80 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 81 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 82 for MSI/MSI-X
ixgbe 0000:15:00.0: irq 83 for MSI/MSI-X
ixgbe: 0000:15:00.0: ixgbe_init_interrupt_scheme: Multiqueue Enabled: Rx Queue count = 24, Tx Queue count = 24
ixgbe: eth2: ixgbe_probe: No DCA provider found. Please start ioatdma for DCA functionality.
ixgbe: eth2: ixgbe_probe: (PCI Express:5.0Gb/s:Width x8) 00:1b:21:6e:cc:36
ixgbe: eth2: ixgbe_probe: MAC: 2, PHY: 15, SFP+: 5, PBA No: E68787-002
ixgbe: eth2: ixgbe_probe: GRO is enabled
ixgbe: eth2: ixgbe_probe: HW RSC is enabled
ixgbe: eth2: ixgbe_probe: Intel(R) 10 Gigabit Network Connection
ixgbe: eth2: ixgbe_sfp_detection_subtask: detected SFP+: 5
ixgbe: eth2: ixgbe_watchdog_link_is_up: NIC Link is Up 10 Gbps, Flow Control: RX/TX
ixgbe: eth2: ixgbe_sfp_detection_subtask: detected SFP+: 5
ixgbe: eth2: ixgbe_upd...
3 Replies
idata (Employee)

Interestingly, looking at the interrupt counts in /proc/interrupts, it appears that only the first 10 cores are processing received packets, so RSS=10,10 is in fact having the intended effect on the receive side. However, I still see the interrupt counts for cores 11 through 24 increase, although much more slowly than for cores 1 through 10. My application under test receives a significant amount of network traffic and transmits very little, which suggests that RSS affects only receive scaling, not transmit scaling. So now my problem is to find out how to restrict the number of TX queues to 10.
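
In case it is useful to anyone, this is roughly how I totalled the counts per core (assuming the usual /proc/interrupts layout, where the per-CPU count columns immediately follow the vector number):

grep eth2 /proc/interrupts | awk '
{
    n = 0
    # columns 2..k are per-CPU counts, up to the first non-numeric field
    for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++)
        sum[n++] += $i
}
END {
    for (c = 0; c < n; c++)
        printf "CPU%d: %d\n", c, sum[c]
}'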

Mark_H_Intel (Employee)

I asked one of the developers about your question. He thinks the flow director (fdir) may be setting up one queue per CPU. Try it with fdir turned off (FdirMode=0,0) and let us know the results.
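
In other words, your ixgbe.conf above would gain one line (again one value per port, matching your two-NIC layout), followed by a driver reload:

options ixgbe FdirMode=0,0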

Mark H

idata (Employee)

Yes, that had the intended effect. I now have exactly 10 RX and 10 TX queues, consistent with the RSS=10,10 setting (one value per NIC) in ixgbe.conf. Thank you for the quick and helpful response.
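
For anyone checking the same thing, the counts are easy to confirm (vector naming and stat names vary between driver versions, so adjust the patterns as needed):

grep -c eth2 /proc/interrupts                        # queue vectors plus one for link-status events
ethtool -S eth2 | grep -c 'tx_queue_[0-9]*_packets'  # one match per TX queue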
