
i40e Ethernet Connection XL710 Network Driver (version 1.5.10-k) not loading correctly on kernel 2.6.32-696

Has anyone run into a similar issue? After a yum update to kernel 2.6.32-696.3.2.el6.x86_64 (or any 2.6.32-696 build), my bond2 interface stops working correctly.

After the update I am unable to set speed settings and unable to ping anything. This causes my NFS shares to stop working, as they are mounted via that NIC.

When I roll back to 2.6.32-642.13.1.el6.x86_64, everything starts working right away.
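For reference, one way to keep booting the older kernel on RHEL 6 is to change the `default` entry in /boot/grub/grub.conf (or use `grubby --set-default`). The sketch below operates on a hypothetical grub.conf fragment rather than the real file, just to illustrate the change:

```shell
# Hypothetical grub.conf fragment; on a real RHEL 6 system this lives at
# /boot/grub/grub.conf and should be edited with care (or via grubby).
grub_conf='default=0
title CentOS (2.6.32-696.3.2.el6.x86_64)
title CentOS (2.6.32-642.13.1.el6.x86_64)'

# Point the default entry at the second title (index 1), i.e. the older kernel.
new_conf=$(printf '%s\n' "$grub_conf" | sed 's/^default=0/default=1/')
first_line=$(printf '%s\n' "$new_conf" | head -n 1)
echo "$first_line"
```

Entries are zero-indexed in the order the `title` lines appear, so `default=1` here selects 2.6.32-642.13.1.el6.x86_64.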

From dmesg, it looks like the kernel is unable to detect that we are using 10 Gbps cards. How do I proceed with reporting this bug?
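On a healthy boot the i40e driver normally logs the negotiated link speed, which is a quick thing to check before filing a bug. The sketch below greps a hypothetical sample line standing in for real dmesg output (on a live system you would pipe `dmesg` itself):

```shell
# Hypothetical stand-in for a real dmesg line; on a working kernel the
# i40e driver logs the negotiated speed when the link comes up.
dmesg_sample='i40e 0000:0b:00.0 eth8: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None'

# Check whether a 10 Gbps link-up message was logged.
if printf '%s\n' "$dmesg_sample" | grep -q '10 Gbps'; then
    link_check="link speed detected: 10 Gbps"
else
    link_check="no 10 Gbps link line found"
fi
echo "$link_check"
```

If no such line appears for either port after booting the new kernel, that absence is useful evidence to attach to the bug report.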

======================================================

2.6.32-696.3.2.el6.x86_64

======================================================

# modinfo i40e

filename: /lib/modules/2.6.32-696.3.2.el6.x86_64/kernel/drivers/net/i40e/i40e.ko

version: 1.5.10-k

license: GPL

description: Intel(R) Ethernet Connection XL710 Network Driver

author: Intel Corporation, <e1000-devel@lists.sourceforge.net>

srcversion: B5DC8E286FEFB9414076D56

alias: pci:v00008086d00001588sv*sd*bc*sc*i*

alias: pci:v00008086d00001587sv*sd*bc*sc*i*

alias: pci:v00008086d000037D4sv*sd*bc*sc*i*

alias: pci:v00008086d000037D3sv*sd*bc*sc*i*

alias: pci:v00008086d000037D2sv*sd*bc*sc*i*

alias: pci:v00008086d000037D1sv*sd*bc*sc*i*

alias: pci:v00008086d000037D0sv*sd*bc*sc*i*

alias: pci:v00008086d000037CFsv*sd*bc*sc*i*

alias: pci:v00008086d000037CEsv*sd*bc*sc*i*

alias: pci:v00008086d00001587sv*sd*bc*sc*i*

alias: pci:v00008086d00001589sv*sd*bc*sc*i*

alias: pci:v00008086d00001586sv*sd*bc*sc*i*

alias: pci:v00008086d00001585sv*sd*bc*sc*i*

alias: pci:v00008086d00001584sv*sd*bc*sc*i*

alias: pci:v00008086d00001583sv*sd*bc*sc*i*

alias: pci:v00008086d00001581sv*sd*bc*sc*i*

alias: pci:v00008086d00001580sv*sd*bc*sc*i*

alias: pci:v00008086d00001574sv*sd*bc*sc*i*

alias: pci:v00008086d00001572sv*sd*bc*sc*i*

depends: ptp

vermagic: 2.6.32-696.3.2.el6.x86_64 SMP mod_unload modversions

parm: debug:Debug level (0=none,...,16=all) (int)

# grep i40e /tmp/dmesg-2.6.32-696.3.2.el6.x86_64

i40e: Intel(R) Ethernet Connection XL710 Network Driver - version 1.5.10-k

i40e: Copyright (c) 2013 - 2014 Intel Corporation.

i40e 0000:0b:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16

i40e 0000:0b:00.0: setting latency timer to 64

i40e 0000:0b:00.0: fw 4.50.37442 api 1.4 nvm 4.60 0x80001f47 1.3072.0

i40e 0000:0b:00.0: MAC address:

i40e 0000:0b:00.0: irq 85 for MSI/MSI-X

i40e 0000:0b:00.0: irq 86 for MSI/MSI-X

i40e 0000:0b:00.0: irq 87 for MSI/MSI-X

i40e 0000:0b:00.0: irq 88 for MSI/MSI-X

i40e 0000:0b:00.0: irq 89 for MSI/MSI-X

i40e 0000:0b:00.0: irq 90 for MSI/MSI-X

i40e 0000:0b:00.0: irq 91 for MSI/MSI-X

i40e 0000:0b:00.0: irq 92 for MSI/MSI-X

i40e 0000:0b:00.0: irq 93 for MSI/MSI-X

i40e 0000:0b:00.0: irq 94 for MSI/MSI-X

i40e 0000:0b:00.0: irq 95 for MSI/MSI-X

i40e 0000:0b:00.0: irq 96 for MSI/MSI-X

i40e 0000:0b:00.0: irq 97 for MSI/MSI-X

i40e 0000:0b:00.0: irq 98 for MSI/MSI-X

i40e 0000:0b:00.0: irq 99 for MSI/MSI-X

i40e 0000:0b:00.0: irq 100 for MSI/MSI-X

i40e 0000:0b:00.0: irq 101 for MSI/MSI-X

i40e 0000:0b:00.0: irq 102 for MSI/MSI-X

i40e 0000:0b:00.0: irq 103 for MSI/MSI-X

i40e 0000:0b:00.0: irq 104 for MSI/MSI-X

i40e 0000:0b:00.0: irq 105 for MSI/MSI-X

i40e 0000:0b:00.0: irq 106 for MSI/MSI-X

i40e 0000:0b:00.0: irq 107 for MSI/MSI-X

i40e 0000:0b:00.0: irq 108 for MSI/MSI-X

i40e 0000:0b:00.0: irq 109 for MSI/MSI-X

i40e 0000:0b:00.0: irq 110 for MSI/MSI-X

i40e 0000:0b:00.0: PCI-Express: Speed 8.0GT/s Width x8

i40e 0000:0b:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 16 RX: 1BUF RSS FD_ATR FD_SB NTUPLE VxLAN PTP VEPA

i40e 0000:0b:00.1: PCI INT A -> GSI 16 (level, low) -> IRQ 16

i40e 0000:0b:00.1: setting latency timer to 64

i40e 0000:0b:00.1: fw 4.50.37442 api 1.4 nvm 4.60 0x80001f47 1.3072.0

i40e 0000:0b:00.1: MAC address:

i40e 0000:0b:00.1: irq 111 for MSI/MSI-X

i40e 0000:0b:00.1: irq 112 for MSI/MSI-X

i40e 0000:0b:00.1: irq 113 for MSI/MSI-X

i40e 0000:0b:00.1: irq 114 for MSI/MSI-X

i40e 0000:0b:00.1: irq 115 for MSI/MSI-X

i40e 0000:0b:00.1: irq 116 for MSI/MSI-X

i40e 0000:0b:00.1: irq 117 for MSI/MSI-X

i40e 0000:0b:00.1: irq 118 for MSI/MSI-X

i40e 0000:0b:00.1: irq 119 for MSI/MSI-X

i40e 0000:0b:00.1: irq 120 for MSI/MSI-X

i40e 0000:0b:00.1: irq 121 for MSI/MSI-X

i40e 0000:0b:00.1: irq 122 for MSI/MSI-X

i40e 0000:0b:00.1: irq 123 for MSI/MSI-X

i40e 0000:0b:00.1: irq 124 for MSI/MSI-X

i40e 0000:0b:00.1: irq 125 for MSI/MSI-X

i40e 0000:0b:00.1: irq 126 for MSI/MSI-X

i40e 0000:0b:00.1: irq 127 for MSI/MSI-X

i40e 0000:0b:00.1: irq 128 for MSI/MSI-X

i40e 0000:0b:00.1: irq 129 for MSI/MSI-X

i40e 0000:0b:00.1: irq 130 for MSI/MSI-X

i40e 0000:0b:00.1: irq 131 for MSI/MSI-X

i40e 0000:0b:00.1: irq 132 for MSI/MSI-X

i40e 0000:0b:00.1: irq 133 for MSI/MSI-X

i40e 0000:0b:00.1: irq 134 for MSI/MSI-X

i40e 0000:0b:00.1: irq 135 for MSI/MSI-X

i40e 0000:0b:00.1: irq 136 for MSI/MSI-X

i40e 0000:0b:00.1: PCI-Express: Speed 8.0GT/s Width x8

i40e 0000:0b:00.1: Features: PF-id[1] VFs: 64 VSIs: 66 QP: 16 RX: 1BUF RSS FD_ATR FD_SB NTUPLE VxLAN PTP VEPA

i40e 0000:0b:00.0: eth8: already using mac address

i40e 0000:0b:00.1: eth9: set new mac address

# ethtool -i bond2

driver: bonding

version: 3.7.1

firmware-version: 2

bus-info:

supports-statistics: no

supports-test: no

supports-eeprom-access: no

supports-register-dump: no

supports-priv-flags: no

# cat /proc/net/bonding/bond2

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)

Primary Slave: None

Currently Active Slave: None

MII Status: down

MII Polling Interval (ms): 100

Up Delay (ms): 0

Down Delay (ms): 0

Slave Interface: eth9

MII Status: down

Speed: Unknown

Duplex: Unknown

Link Failure Count: 0

Permanent HW addr:

Slave queue ID: 0

Slave Interface: eth8

MII Status: down

Speed: Unknown

Duplex: Unkno...
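The bonding output above shows every MII status as down, which can be checked mechanically. The sketch below counts "MII Status: down" lines in a hypothetical sample mirroring that output; a real check would read /proc/net/bonding/bond2 directly:

```shell
# Hypothetical snippet mirroring the /proc/net/bonding/bond2 output above;
# on a live system, read the file itself instead of this sample string.
bond_status='MII Status: down
Slave Interface: eth9
MII Status: down
Slave Interface: eth8
MII Status: down'

# Count the "MII Status: down" entries (the bond itself plus each slave).
down_count=$(printf '%s\n' "$bond_status" | grep -c 'MII Status: down')
echo "MII down entries: $down_count"
```

With the bond and both slaves down, the count here is 3; on a healthy active-backup bond all three entries would read "up".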

Community Manager

Hi Alegutier,

Thank you for posting at Wired Communities. Can you share the exact XL710 network adapter model? Is it an onboard NIC? If so, what are the brand and model of your system? And just to double-check: are you using CentOS?

I will further check. Thank you.

regards,

sharon
Beginner

Thank you for your reply. Answers below:

NIC model: HPE Ethernet 10Gb 2-port 562SFP+ Adapter (PCI adapter)

The NIC was updated to firmware 1.1375.0; the issue was present prior to the firmware update.

OS: RHEL 6.9

Thank you

Community Manager

Hi Alegutier,

Thank you for the information. It is recommended that you contact HP* support, as this is an HP* network adapter and not an Intel product. You may refer to the support information stated on page 6 of their Quick Spec at

https://www.hpe.com/h20195/v2/getpdf.aspx/c04605575.pdf?ver=2 (please note this is a third-party website for your reference only; Intel does not have control over the content therein).
Hope this helps.

regards,

sharon

 
