Ethernet Products

PCI passthrough of Ethernet Controller XL710 for 40GbE QSFP+

Jeipe
Beginner

Has anyone successfully used PCI passthrough with the Intel 40GbE interface?

I am trying this on OpenStack/KVM. The device is passed through, but data transfer fails.

In the same setup, PCI passthrough of Intel 10GbE interfaces (82599ES 10-Gigabit SFI/SFP+ Network Connection) works just fine.
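For reference, a minimal sketch of the Nova PCI passthrough configuration for this kind of setup (the vendor/device ID 8086:1583 and the alias name are illustrative; take the exact IDs from lspci -nn on the host):

On the compute node, whitelist the device in nova.conf:

pci_passthrough_whitelist = {"vendor_id":"8086","product_id":"1583"}

On the controller, define an alias and attach it to a flavor:

pci_alias = {"vendor_id":"8086","product_id":"1583","name":"xl710"}

# nova flavor-key m1.large set "pci_passthrough:alias"="xl710:1"

Inside the guest, the device should then be visible:

# lspci -nn | grep -i ethernet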

SharonT_t_Intel
Employee

Hi opstkusr,

We will check on this. Thanks for the post.

rgds,

wb

Jeipe
Beginner

Thanks for the response.

Here is the dmesg output from the VM when a data transmit is attempted:

[ 11.544088] i40e 0000:00:05.0 eth1: NIC Link is Up 40 Gbps Full Duplex, Flow Control: None

[ 11.689178] i40e 0000:00:06.0 eth2: NIC Link is Up 40 Gbps Full Duplex, Flow Control: None

[ 16.704071] ------------[ cut here ]------------

[ 16.705053] WARNING: CPU: 1 PID: 0 at net/sched/sch_generic.c:303 dev_watchdog+0x23e/0x250()

[ 16.705053] NETDEV WATCHDOG: eth1 (i40e): transmit queue 1 timed out

[ 16.705053] Modules linked in: cirrus ttm drm_kms_helper i40e drm ppdev serio_raw i2c_piix4 virtio_net parport_pc ptp virtio_balloon crct10dif_pclmul pps_core parport pvpanic crc32_pclmul ghash_clmulni_intel virtio_blk crc32c_intel virtio_pci virtio_ring virtio ata_generic pata_acpi

[ 16.705053] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 3.18.7-200.fc21.x86_64 #1

[ 16.705053] Hardware name: Fedora Project OpenStack Nova, BIOS 1.7.5-20140709_153950- 04/01/2014

[ 16.705053] 0000000000000000 2e5932b294d0c473 ffff88043fc83d48 ffffffff8175e686

[ 16.705053] 0000000000000000 ffff88043fc83da0 ffff88043fc83d88 ffffffff810991d1

[ 16.705053] ffff88042958f5c0 0000000000000001 ffff88042865f000 0000000000000001

[ 16.705053] Call Trace:

[ 16.705053] [] dump_stack+0x46/0x58

[ 16.705053] [] warn_slowpath_common+0x81/0xa0

[ 16.705053] [] warn_slowpath_fmt+0x55/0x70

[ 16.705053] [] dev_watchdog+0x23e/0x250

[ 16.705053] [] ? dev_graft_qdisc+0x80/0x80

[ 16.705053] [] call_timer_fn+0x3a/0x120

[ 16.705053] [] ? dev_graft_qdisc+0x80/0x80

[ 16.705053] [] run_timer_softirq+0x212/0x2f0

[ 16.705053] [] __do_softirq+0x124/0x2d0

[ 16.705053] [] irq_exit+0x125/0x130

[ 16.705053] [] smp_apic_timer_interrupt+0x48/0x60

[ 16.705053] [] apic_timer_interrupt+0x6d/0x80

[ 16.705053] [] ? hrtimer_start+0x18/0x20

[ 16.705053] [] ? native_safe_halt+0x6/0x10

[ 16.705053] [] ? rcu_eqs_enter+0xa3/0xb0

[ 16.705053] [] default_idle+0x1f/0xc0

[ 16.705053] [] arch_cpu_idle+0xf/0x20

[ 16.705053] [] cpu_startup_entry+0x3c5/0x410

[ 16.705053] [] start_secondary+0x1af/0x1f0

[ 16.705053] ---[ end trace 7bda53aeda558267 ]---

[ 16.705053] i40e 0000:00:05.0 eth1: tx_timeout recovery level 1

[ 16.705053] i40e 0000:00:05.0: i40e_vsi_control_tx: VSI seid 519 Tx ring 0 disable timeout

[ 16.744198] i40e 0000:00:05.0: i40e_vsi_control_tx: VSI seid 520 Tx ring 64 disable timeout

[ 16.779322] i40e 0000:00:05.0: i40e_ptp_init: added PHC on eth1

[ 16.791819] i40e 0000:00:05.0: PF 40 attempted to control timestamp mode on port 1, which is owned by PF 1

[ 16.933869] i40e 0000:00:05.0 eth1: NIC Link is Up 40 Gbps Full Duplex, Flow Control: None

[ 18.853624] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs

[ 22.720083] i40e 0000:00:05.0 eth1: tx_timeout recovery level 2

[ 22.826993] i40e 0000:00:05.0: i40e_vsi_control_tx: VSI seid 519 Tx ring 0 disable timeout

[ 22.935288] i40e 0000:00:05.0: i40e_vsi_control_tx: VSI seid 520 Tx ring 64 disable timeout

[ 23.669555] i40e 0000:00:05.0: i40e_ptp_init: added PHC on eth1

[ 23.682067] i40e 0000:00:05.0: PF 40 attempted to control timestamp mode on port 1, which is owned by PF 1

[ 23.722423] i40e 0000:00:05.0 eth1: NIC Link is Up 40 Gbps Full Duplex, Flow Control: None

[ 23.800206] i40e 0000:00:06.0: i40e_ptp_init: added PHC on eth2

[ 23.813804] i40e 0000:00:06.0: PF 48 attempted to control timestamp mode on port 0, which is owned by PF 0

[ 23.855275] i40e 0000:00:06.0 eth2: NIC Link is Up 40 Gbps Full Duplex, Flow Control: None

[ 38.720091] i40e 0000:00:05.0 eth1: tx_timeout recovery level 3

[ 38.725844] random: nonblocking pool is initialized

[ 38.729874] i40e 0000:00:06.0: HMC error interrupt

[ 38.733425] i40e 0000:00:06.0: i40e_vsi_control_tx: VSI seid 518 Tx ring 0 disable timeout

[ 38.738886] i40e 0000:00:06.0: i40e_vsi_control_tx: VSI seid 521 Tx ring 64 disable timeout

[ 39.689569] i40e 0000:00:06.0: i40e_ptp_init: added PHC on eth2

st4
New Contributor III

Thanks for providing additional info.

rgds,

wb

st4
New Contributor III

Hi opstkusr,

We are still checking on this and will update you if there are further findings. Thank you for your time on this matter.

rgds,

wb

Jeipe
Beginner

Hi,

Is there any possibility that I could get an update on this any time soon? At least as to whether this is a known i40e driver issue, or something else?

I have updated the i40e driver and the XL710 firmware to the latest available, as per the recommendation from the e1000 devel group, and there is still no change in behavior.

# ethtool -i p785p1

driver: i40e

version: 1.2.37

firmware-version: f4.33.31377 a1.2 n4.42 e1930

bus-info: 0000:09:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes
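For anyone following along, the update steps were roughly the usual out-of-tree ones (version numbers illustrative; the firmware is flashed with Intel's NVM update utility run on the host):

# tar zxf i40e-1.2.37.tar.gz

# cd i40e-1.2.37/src && make install

# rmmod i40e && modprobe i40e

# ethtool -i p785p1 (to confirm the new driver version is loaded)

# ./nvmupdate64e (from Intel's NVM update package, for the firmware)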

I am stuck at this point and would really appreciate any help in addressing this issue, or at least an indication of whether this is a known issue that is being worked on.

Thanks

Jacob

st4
New Contributor III

Hi Jacob,

We are still checking on this; we apologize for any inconvenience this may have caused. We will update you as soon as there are any findings.

thanks,

wb

st4
New Contributor III

Hi Jacob,

Please send the following outputs as attachments so we can investigate further (example collection commands follow the list):

Linux kernel log (please make sure to send the entire log)

KVM log

Linux "dmidecode" command output

Thanks,

wb

Jeipe
Beginner

I do not see this issue after upgrading the firmware and using the latest i40e driver, as per the recommendation from the Intel e1000 mailing list.

The remaining issue is that DPDK 1.8.0 doesn't work with these devices from the VM.
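For context, the attempt inside the VM followed the standard DPDK 1.8.0 binding flow; a minimal sketch (PCI address, build target, and core mask are illustrative, and hugepages must already be set up):

# modprobe uio

# insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko

# ./tools/dpdk_nic_bind.py --bind=igb_uio 0000:00:05.0

# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i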

SYeo3
Valued Contributor I

Hi opstkusr,

Our software engineer suggested that you use the listed modules for your adapter.

Kindly see his reply to you on Sourceforge at http://sourceforge.net/p/e1000/mailman/message/33617294/ , quoted below:

From the README:

"

SFP+ Devices with Pluggable Optics

----------------------------------

SR Modules

----------

Intel DUAL RATE 1G/10G SFP+ SR (bailed) FTLX8571D3BCV-IT

Intel DUAL RATE 1G/10G SFP+ SR (bailed) AFBR-703SDZ-IN2

LR Modules

----------

Intel DUAL RATE 1G/10G SFP+ LR (bailed) FTLX1471D3BCV-IT

Intel DUAL RATE 1G/10G SFP+ LR (bailed) AFCT-701SDZ-IN2

QSFP+ Modules

-------------

Intel TRIPLE RATE 1G/10G/40G QSFP+ SR (bailed) E40GQSFPSR

Intel TRIPLE RATE 1G/10G/40G QSFP+ LR (bailed) E40GQSFPLR

QSFP+ 1G speed is not supported on XL710 based devices.

X710/XL710 Based SFP+ adapters support all passive and active limiting direct

attach cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.

"

Please keep in mind that the check for supported modules is done in the firmware, not in the driver. At this time you have to use the listed modules to get connectivity.

Todd Fujinaka

Software Application Engineer

Networking Division (ND)

Intel Corporation

===============

With this, we would like to request the markings of the network card and the modules you're using. You may send these details to me via private message (PM). We look forward to your reply.
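If the driver and firmware support module EEPROM access, ethtool can also read the module identification directly (interface name illustrative); the output includes the vendor name and part number fields to compare against the list above:

# ethtool -m p785p1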

Sincerely,

Sandy
