
SR-IOV on 82599EB 10G-Cards: Packet loss at rates over 1.19Mpps

RNeum1
Beginner

Hello everybody,

We are trying to set up FPP with SR-IOV using different numbers of virtual functions on 82599EB 10G cards.

Unfortunately, we get extensive packet loss (50-99%) when receiving at rates over 1.19 Mpps.

This does not happen when no virtual functions are used at all (rates up to 14.4 Mpps with negligible packet loss).
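For reference, this is roughly how we bring up the VFs (a sketch only; the exact module options depend on the driver build):

# Sketch of our SR-IOV bring-up, assuming the out-of-tree ixgbe driver
# that accepts the max_vfs module parameter.
rmmod ixgbevf ixgbe
modprobe ixgbe max_vfs=2               # create 2 VFs per port
modprobe ixgbevf                       # VF driver; VFs appear as extra netdevs
lspci | grep -i "Virtual Function"     # confirm the VFs exist on the PCI bus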

We are using the following drivers:

ixgbe:

filename:    /lib/modules/2.6.32-358.11.1.el6.x86_64/kernel/drivers/net/ixgbe/ixgbe.ko
version:     3.16.1
license:     GPL
description: Intel(R) 10 Gigabit PCI Express Network Driver
author:      Intel Corporation, <linux.nics@intel.com>
srcversion:  F5543A9293AAC17C44DC73F

ixgbevf:

filename:    /lib/modules/2.6.32-358.11.1.el6.x86_64/kernel/drivers/net/ixgbevf/ixgbevf.ko
version:     2.6.0-k
license:     GPL
description: Intel(R) 82599 Virtual Function Driver
author:      Intel Corporation, <linux.nics@intel.com>
srcversion:  12EE2309BA821014C79DC53
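(The outputs above are from modinfo; if it matters, the version of the driver that is actually loaded can also be cross-checked with ethtool, e.g.:)

# How we read the versions above - standard tools, nothing special:
modinfo ixgbe   | egrep 'filename|^version|srcversion'
modinfo ixgbevf | egrep 'filename|^version|srcversion'
ethtool -i eth1    # version of the driver currently bound to the PF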

Does anyone have further information about this issue, or about how to reach higher receive rates without packet loss?

Thank you and best regards,

Richard

Patrick_K_Intel1
Employee

Hi Richard,

Since you are referring to it as FPP, I am assuming you have read my papers and seen the videos on FPP. There is no voodoo there, no sleight of hand, I promise you. I achieve nearly line rate with the VFs.

However, performance depends on a great many factors. First, I see that you are not using the latest VF driver - I always suggest using the latest and greatest.

What kind of traffic are you sending? What are the packet sizes, and are they going through a switch? How many VFs are you running, and are they running concurrently? Is this VF-to-VF traffic on the same system? If so, are the VFs assigned to the same PF? What OS are you using?

If you can provide some more information I am confident we can get to the bottom of this.

thanx,

Patrick

RNeum1
Beginner

Hello Patrick,

Thank you for your quick reply.

I have read your papers and watched your videos on FPP.

I'm going to test the issue with the latest ixgbe/ixgbevf drivers asap.

I am sending traffic generated by a slightly modified version of netmap's pkt-gen program (a high-speed traffic generator), which does not use a virtualized interface (just one operated by the regular ixgbe driver).
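For orientation, a stock pkt-gen transmit invocation looks roughly like the one below; our modified version additionally rotates the destination MAC address, and the interface name and IP addresses here are just placeholders:

# Sketch of a stock netmap pkt-gen transmit run: 10 million 64-byte packets
# to a fixed destination MAC (our modified version differs slightly).
./pkt-gen -i eth2 -f tx -n 10000000 -l 64 \
          -d 10.0.0.2:1234 -s 10.0.0.1:1234 \
          -D 00:1b:21:d5:70:01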

I'm always sending at full line rate (10 Gbit/s) but with different packet sizes; please see the table below:

max_vfs  pkts sent   packet size (bytes)  packets per second  pkts received  pkts lost
2        10,000,000  1500                 818.64K                10,000,000          0
2        10,000,000  1024                 1.19M                  10,000,000          0
2        10,000,000  512                  2.33M                   5,964,345  4,035,655
2        10,000,000  256                  4.46M                   3,312,994  6,687,006
2        10,000,000  128                  8.20M                   1,782,025  8,217,975
2        10,000,000  64                   9.59M                   1,574,879  8,425,121

As you can see from the table, I use 2 virtual functions assigned to the same physical function. I'm sending 10 million packets in total and counting how many of them arrive at the receiver. For packet sizes below 1024 bytes (i.e., packet rates above 1.19 million packets per second at 10 Gbit/s), I encounter significant packet loss.

The packets do not go through a switch; the two servers are directly connected. We use CentOS 6.4, 64-bit.

Best regards,

Richard

Patrick_K_Intel1
Employee

What is the CPU utilization at those smaller packet sizes?

While SR-IOV is faster in many ways than a virtualized adapter, one must remember that even though the VM is DMAing data directly to and from the physical hardware, those pesky interrupts are virtualized - meaning that the Hypervisor gets the interrupt for an incoming packet and then via software simulates that same interrupt up to the VM for processing.

Is the traffic you are sending 'streaming'? My guess is that it is, so the transmitter is probably blasting data as fast as possible without caring whether the receiver can handle it all. My guess is also that the overhead from the virtual interrupts slows things down enough that the buffers in the NIC are overrun, causing packet loss.

In an emulated situation that won't happen, because the Hypervisor receives all the traffic and then sends it off to the VM - at the cost of CPU cycles, however.
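One way to check that theory is to watch the NIC's drop counters on the PF while you blast traffic - if the hardware buffers are overflowing, the missed/drop style counters should climb. Something along these lines (eth1 is a placeholder for your PF, and the exact counter names vary between driver versions):

# Watch the PF statistics during a test run; rising 'missed'/'drop' counters
# point at the NIC buffers overflowing.
ethtool -S eth1 | egrep -i 'miss|drop|no_buff|no_dma'
watch -d 'ethtool -S eth1 | egrep -i "miss|drop"'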

I'm going to be in class the rest of the week, so don't be offended if I don't reply to any subsequent updates until next week. :-)

- Patrick

RNeum1
Beginner

Hi,

The CPU utilization is negligible: top shows less than 0.4% on each of the six cores while receiving.

We are, by the way, not using any virtual machines so far. We're running both the PFs and the VFs on the same operating system on physical hardware with a six-core Intel(R) Xeon(R) CPU X5650 @ 2.67GHz and 6 GB of RAM.

The packet generator is indeed blasting out as many packets as possible; this is why we can only decrease the sending rate by increasing the packet size (see the table above). However, this is no problem for ixgbe when no VFs are being used. I'll check the interrupt counters to investigate this.
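Concretely, I plan to watch how the PF/VF interrupt vectors fire and which cores service them during a run, roughly like this (interface names as in our setup):

# Check how the ixgbe/ixgbevf MSI-X vectors fire while the generator runs.
egrep 'eth1|eth10|eth11' /proc/interrupts
watch -n1 -d 'egrep "eth1|eth10|eth11" /proc/interrupts'
grep . /proc/irq/*/smp_affinity    # IRQ-to-core affinity, in case pinning matters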

I'll update this post as soon as I have further information.

Have a nice weekend,

Richard

PS: We're counting the packets on the receiver's side using the information provided by /proc/net/dev.
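(To be precise, we snapshot the counters before and after each run and compare, roughly like this:)

# Sketch of how we count received packets: copy /proc/net/dev before and after
# a run and compare. After the interface name, the second number is received
# packets and the fourth is rx drops.
cp /proc/net/dev /tmp/netdev.before
# ... run one pkt-gen burst ...
cp /proc/net/dev /tmp/netdev.after
diff /tmp/netdev.before /tmp/netdev.after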

Patrick_K_Intel1
Employee

How are you separating the traffic on the VFs? Do you assign a different VLAN to each, or do you have them on different subnets?
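(If you do decide to give each VF its own VLAN, you can pin that from the host side on the PF - something along these lines, where the PF name and VLAN IDs are just placeholders:)

# Hypothetical example: fix the MACs of VF 0 and VF 1 on the PF (here eth1)
# and put each VF on its own VLAN from the host.
ip link set eth1 vf 0 mac 00:1b:21:d5:70:02
ip link set eth1 vf 0 vlan 100
ip link set eth1 vf 1 mac 00:1b:21:d5:70:03
ip link set eth1 vf 1 vlan 200
ip link show eth1     # the 'vf N MAC ... vlan ...' lines reflect the settings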

thanx,

Patrick

RNeum1
Beginner

The flows are being separated by their respective destination MAC address only, which works fine so far.

The VFs are configured in promiscuous mode and do not have any IP configuration so far (no IP address, subnet, or VLAN tag).

Here's an example of our configuration:

1: lo: mtu 16436 qdisc noqueue state UNKNOWN

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

6: eth4: mtu 1500 qdisc noop state DOWN qlen 1000

link/ether a0:36:9f:00:f8:d0 brd ff:ff:ff:ff:ff:ff

7: eth5: mtu 1500 qdisc noop state DOWN qlen 1000

link/ether a0:36:9f:00:f8:d1 brd ff:ff:ff:ff:ff:ff

8: eth6: mtu 1500 qdisc noop state DOWN qlen 1000

link/ether a0:36:9f:00:f8:d2 brd ff:ff:ff:ff:ff:ff

9: eth7: mtu 1500 qdisc noop state DOWN qlen 1000

link/ether a0:36:9f:00:f8:d3 brd ff:ff:ff:ff:ff:ff

10: eth8: mtu 1500 qdisc mq state UP qlen 1000

link/ether 00:25:90:54:a8:e4 brd ff:ff:ff:ff:ff:ff

11: eth9: mtu 1500 qdisc noop state DOWN qlen 1000

link/ether 00:25:90:54:a8:e5 brd ff:ff:ff:ff:ff:ff

216: eth0: mtu 1500 qdisc mq state UP qlen 1000

link/ether 00:1b:21:d5:70:e4 brd ff:ff:ff:ff:ff:ff

219: eth1: mtu 1500 qdisc mq state UP qlen 1000

link/ether 00:1b:21:d5:70:01 brd ff:ff:ff:ff:ff:ff

vf 0 MAC 00:1b:21:d5:70:02

vf 1 MAC 00:1b:21:d5:70:03

220: eth2: mtu 1500 qdisc mq state DOWN qlen 1000

link/ether 00:1b:21:d1:ca:7c brd ff:ff:ff:ff:ff:ff

221: eth3: mtu 1500 qdisc mq state DOWN qlen 1000

link/ether 00:1b:21:d1:ca:7d brd ff:ff:ff:ff:ff:ff

226: eth11: mtu 1500 qdisc noop state DOWN qlen 1000

link/ether 00:1b:21:d5:70:02 brd ff:ff:ff:ff:ff:ff

227: eth10: mtu 1500 qdisc noop state DOWN qlen 1000

link/ether 00:1b:21:d5:70:03 brd ff:ff:ff:ff:ff:ff

As you can see, we use two VFs on eth1 in this case, which show up as eth11 and eth10 in ifconfig. The packets are sent to the MAC addresses 00:1b:21:d5:70:01, 00:1b:21:d5:70:02, and 00:1b:21:d5:70:03 alternately, so we receive about one third of the total packets on each of the devices eth1, eth11, and eth10.

Best regards,

Richard
