Ethernet Products

i40e XL710 QDA2 as iSCSI initiator results in "RX driver issue detected, PF reset issued", and iscsi ping timeouts

JErni
Beginner

Hi!

There is a dual E5-2690v3 box based on a Supermicro SYS-2028GR-TR/X10DRG-H, BIOS 1.0c, running Ubuntu 16.04.1 with all current updates.

It has an XL710-QDA2 card, fw 5.0.40043 api 1.5 nvm 5.04 0x80002537, driver 1.5.25 (the stock Ubuntu i40e driver 1.4.25 resulted in a crash), which is planned to be used as an iSCSI initiator endpoint. But there seems to be a problem: the log file fills up with "RX driver issue detected" messages and occasionally the iSCSI link resets as ping times out. This is a critical error, as the mounted device becomes unusable!

So, Question 1: Is there something that can be done to fix the iSCSI behaviour of the XL710 card? When testing the card with iperf (2 concurrent sessions, the other end had a 10G NIC), there were no problems. The problems started when the iSCSI connection was established.

Question 2: Is there a way to force the card to work in PCI Express 2.0 mode? The server downgraded the card once after several previous failures and then it became surprisingly stable. I cannot find a way to make it persist though.
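For what it's worth, one approach I have seen suggested for forcing a lower link speed from the OS, which I have not verified on this box, is to set the Target Link Speed on the PCIe bridge above the NIC with setpci and then retrain the link. A sketch, as root, with the NIC at 0000:81:00.0:

# find the bridge (downstream port) the NIC hangs off
BRIDGE=$(basename "$(dirname "$(readlink /sys/bus/pci/devices/0000:81:00.0)")")

# Link Control 2 (CAP_EXP+0x30): set Target Link Speed bits [3:0] to 2 = 5.0 GT/s
LC2=$(setpci -s "$BRIDGE" CAP_EXP+30.w)
setpci -s "$BRIDGE" CAP_EXP+30.w=$(printf '%04x' $(( (0x$LC2 & ~0xF) | 2 )))

# Link Control (CAP_EXP+0x10): set the Retrain Link bit (bit 5)
LC=$(setpci -s "$BRIDGE" CAP_EXP+10.w)
setpci -s "$BRIDGE" CAP_EXP+10.w=$(printf '%04x' $(( 0x$LC | 0x20 )))

# verify what was negotiated
lspci -vv -s 81:00.0 | grep LnkSta

Whether the platform keeps this setting across PF resets or reboots is another question, so it is not really a persistent solution either.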

Some excerpts from log files (there are also occasional TX driver issues, but much less frequently than RX problems):

[ 263.116057] EXT4-fs (sdk): mounted filesystem with ordered data mode. Opts: (null)

[ 321.030246] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 332.512601] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

..lots of the above messages...

[ 481.001787] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 487.183237] NOHZ: local_softirq_pending 08

[ 491.151322] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

..lots of the above messages...

[ 1181.099046] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1199.852665] connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295189627, last ping 4295190878, now 4295192132

[ 1199.852694] connection1:0: detected conn error (1022)

[ 1320.412312] session1: session recovery timed out after 120 secs

[ 1320.412325] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412331] sd 10:0:0:0: [sdk] killing request

[ 1320.412347] sd 10:0:0:0: [sdk] FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK

[ 1320.412352] sd 10:0:0:0: [sdk] CDB: Write Same(10) 41 00 6b 40 69 00 00 08 00 00

[ 1320.412356] blk_update_request: I/O error, dev sdk, sector 1799383296

[ 1320.412411] sd 10:0:0:0: rejecting I/O to offline device

...the above message repeats many more times...

[ 1320.412555] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412566] Aborting journal on device sdk-8.

[ 1320.412571] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412576] JBD2: Error -5 detected when updating journal superblock for sdk-8.

[ 1332.831851] sd 10:0:0:0: rejecting I/O to offline device

[ 1332.831864] EXT4-fs error (device sdk): ext4_journal_check_start:56: Detected aborted journal

[ 1332.831869] EXT4-fs (sdk): Remounting filesystem read-only

[ 1332.831873] EXT4-fs (sdk): previous I/O error to superblock detected

Unloading the kernel module and modprobe-ing it again:
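(Roughly the following; the exact invocation was not recorded:)

# reload the driver; any connections over the NIC drop meanwhile
modprobe -r i40e
modprobe i40e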

[ 1380.970732] i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 1.5.25

[ 1380.970737] i40e: Copyright(c) 2013 - 2016 Intel Corporation.

[ 1380.987563] i40e 0000:81:00.0: fw 5.0.40043 api 1.5 nvm 5.04 0x80002537 0.0.0

[ 1381.127289] i40e 0000:81:00.0: MAC address: 3c:xx:xx:xx:xx:xx

[ 1381.246815] i40e 0000:81:00.0 p5p1: renamed from eth0

[ 1381.358723] i40e 0000:81:00.0 p5p1: NIC Link is Up 40 Gbps Full Duplex, Flow Control: None

[ 1381.416135] i40e 0000:81:00.0: PCI-Express: Speed 8.0GT/s Width x8

[ 1381.454729] i40e 0000:81:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF DCB VxLAN Geneve NVGRE PTP VEPA

[ 1381.471584] i40e 0000:81:00.1: fw 5.0.40043 api 1.5 nvm 5.04 0x80002537 0.0.0

[ 1381.605866] i40e 0000:81:00.1: MAC address: 3c:xx:xx:xx:xx:xy

[ 1381.712287] i40e 0000:81:00.1 p5p2: renamed from eth0

[ 1381.751417] IPv6: ADDRCONF(NETDEV_UP): p5p2: link is not ready

[ 1381.810607] IPv6: ADDRCONF(NETDEV_UP): p5p2: link is not ready

[ 1381.820095] i40e 0000:81:00.1: PCI-Express: Speed 8.0GT/s Width x8

[ 1381.826141] i40e 0000:81:00.1: Features: PF-id[1] VFs: 64 VSIs: 66 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF DCB VxLAN Geneve NVGRE PTP VEPA

[ 1647.123056] EXT4-fs (sdk): recovery complete

[ 1647.123414] EXT4-fs (sdk): mounted filesystem with ordered data mode. Opts: (null)

[ 1668.179234] NOHZ: local_softirq_pending 08

[ 1673.994586] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1676.871805] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1692.833097] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1735.179086] NOHZ: local_softirq_pending 08

[ 1767.357902] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1803.828762] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

After several failures, the card loaded in PCI Express 2.0 mode and then became stable:

Jan 1 18:44:35 systemd[1]: Started ifup for p5p1.

Jan 1 18:44:35 systemd[1]: Found device Ethernet Controller XL710 for 40GbE QSFP+ (Ethernet Converged Network Adapter XL710-Q2).

Jan 1 18:44:35 NetworkManager[1911]: [1483289075.5028] devices added (path: /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/net/p5p1, iface: p5p1)

Jan 1 18:44:35 NetworkManager[1911]: [1483289075.5029] locking wired connection setting

Jan 1 18:44:35 NetworkManager[1911]: [1483289075.5029] get unmanaged devices count: 3

Jan 1 18:44:35 avahi-daemon[1741]: Joining mDNS multicast group on interface p5p1.IPv4 with address xx.xx.xx.xx.

Jan 1 18:44:35 avahi-daemon[1741]: New relevant interface p5p1.IPv4 for mDNS.

Jan 1 18:44:35 NetworkManager[1911]: [1483289075.5577] device (p5p1): link connected

Jan 1 18:44:35 avahi-daemon[1741]: Registering new address record for xx.xx.xx.xx on p5p1.IPv4.

Jan 1 18:44:35 kernel: [11572.541797] i40e 0000:81:00.0 p5p1: NIC Link is Up 40 Gbps Full Duplex, Flow Control: None

Jan 1 18:44:35 kernel: [11572.579303] i40e 0000:81:00.0: PCI-Express: Speed 5.0GT/s Width x8

Jan 1 18:44:35 kernel: [11572.579309] i40e 0000:81:00.0: PCI-Express bandwidth available for this device may be insufficient for optimal performance.

Jan 1 18:44:35 kernel: [11572.579312] i40e 0000:81:00.0: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.

Jan 1 18:44:35 kernel: [11...

st4
New Contributor III

Hi JPE,

Thank you for the post. The XL710-QDA2 should be backward compatible when installed in a PCI Express 2.0 slot. I will check further on your question about the iSCSI behavior.

rgds,

wb

JErni
Beginner

Hi!

Is it possible to force the card to PCI Express 2.0 mode by i40e module parameters or some other way from the OS, or should it be done from the BIOS?
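In the meantime, the negotiated link speed versus what the slot supports can at least be read from the OS:

# link capability vs. current link status for the NIC
lspci -vv -s 81:00.0 | grep -E 'LnkCap:|LnkSta:'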

Some more recent messages from dmesg (including an oops that occurs after iscsid has been running for a few hours):

[17177.957714] connection1:0: detected conn error (1022)

[17193.493630] connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4299188053, last ping 4299189304, now 4299190556

[17193.493654] connection1:0: detected conn error (1022)

[17196.297655] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[17209.959889] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[17414.263227] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[17420.231216] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[17456.831086] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[17475.067026] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[17477.382925] i40e 0000:81:00.0: set phy mask fail, err I40E_ERR_ADMIN_QUEUE_TIMEOUT aq_err OK

[17478.411095] i40e 0000:81:00.0: couldn't get PF vsi config, err I40E_ERR_ADMIN_QUEUE_TIMEOUT aq_err OK

[17478.411107] i40e 0000:81:00.0: rebuild of veb_idx 0 owner VSI failed: -2

[17478.411114] i40e 0000:81:00.0: rebuild of switch failed: -2, will try to set up simple PF connection

[17478.923803] i40e 0000:81:00.0: couldn't get PF vsi config, err I40E_ERR_ADMIN_QUEUE_TIMEOUT aq_err OK

[17478.923813] i40e 0000:81:00.0: rebuild of Main VSI failed: -2

[17484.756674] connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4299260867, last ping 4299262118, now 4299263372

[17484.756704] connection1:0: detected conn error (1022)

[17605.028334] session1: session recovery timed out after 120 secs

[17605.028349] sd 10:0:0:0: rejecting I/O to offline device

[17605.028355] sd 10:0:0:0: [sdk] killing request

[17605.028371] sd 10:0:0:0: [sdk] FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK

[17605.028377] sd 10:0:0:0: [sdk] CDB: Write same(16) 93 00 00 00 00 02 f0 40 61 00 00 00 08 00 00 00

[17605.028381] blk_update_request: I/O error, dev sdk, sector 12620685568

[17605.028437] sd 10:0:0:0: rejecting I/O to offline device

[29413.061975] CPU: 31 PID: 15250 Comm: biosdevname Tainted: P OE 4.4.0-57-generic #78-Ubuntu

[29413.063373] Hardware name: Supermicro SYS-2028GR-TR/X10DRG-H, BIOS 1.0c 05/20/2015

[29413.064784] task: ffff883f5356c600 ti: ffff881835d64000 task.ti: ffff881835d64000

[29413.066218] RIP: 0010:[] [] dev_get_stats+0x19/0x100

[29413.067682] RSP: 0018:ffff881835d67cc0 EFLAGS: 00010246

[29413.069142] RAX: 0000000000000000 RBX: ffff881835d67d48 RCX: 0000000000000001

[29413.070622] RDX: ffffffffc1485540 RSI: ffff881835d67d48 RDI: ffff887f6308b000

[29413.072097] RBP: ffff881835d67cd0 R08: 0000000000000056 R09: 00000000000001be

[29413.073583] R10: ffff883f64964000 R11: ffff883f649641bd R12: ffff887f6308b000

[29413.075083] R13: ffff887f63ce7b00 R14: ffff883f63acdb00 R15: ffff887f6308b000

[29413.076580] FS: 00007f552a2ad740(0000) GS:ffff883f7fcc0000(0000) knlGS:0000000000000000

[29413.078097] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033

[29413.079614] CR2: ffffffffc14855b8 CR3: 0000000aa3288000 CR4: 00000000001406e0

[29413.081144] Stack:

[29413.082658] ffff883f63acdb00 ffff887f6308b000 ffff881835d67e18 ffffffff817492d7

[29413.084219] 0000000000000000 0000000000000000 0000000000000000 0000000000000000

[29413.085788] 0000000000000000 0000000000183b17 0000000000002446 0000000000000000

[29413.087350] Call Trace:

[29413.088900] [] dev_seq_printf_stats+0x37/0x120

[29413.090464] [] dev_seq_show+0x14/0x30

[29413.092019] [] seq_read+0x2d6/0x390

[29413.093575] [] proc_reg_read+0x42/0x70

[29413.095126] [] __vfs_read+0x18/0x40

[29413.096687] [] vfs_read+0x86/0x130

[29413.098253] [] SyS_read+0x55/0xc0

[29413.099810] [] entry_SYSCALL_64_fastpath+0x16/0x71

[29413.101375] Code: ce 81 c1 b8 00 00 00 c1 e9 03 f3 48 a5 5d c3 0f 1f 00 0f 1f 44 00 00 55 48 89 e5 41 54 53 48 8b 97 00 02 00 00 49 89 fc 48 89 f3 <48> 83 7a 78 00 74 54 48 8d 7e 08 48 89 f1 31 c0 48 c7 06 00 00

[29413.104773] RIP [] dev_get_stats+0x19/0x100

[29413.106419] RSP

[29413.108029] CR2: ffffffffc14855b8

[29413.115107] ---[ end trace 984eff0723d78e6c ]---

[29413.221129] i40e 0000:81:00.0: PCI-Express: Speed 8.0GT/s Width x8

[29413.259233] i40e 0000:81:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF DCB VxLAN Geneve NVGRE PTP VEPA

[29413.323260] i40e 0000:81:00.1: fw 5.0.40043 api 1.5 nvm 5.04 0x80002537 0.0.0

[29413.397529] i40e 0000:81:00.1: MAC address: 3c:xx:xx:xx:xx:xx

[29413.498143] BUG: unable to handle kernel paging request at ffffffffc14855b8

[29413.499594] IP: [] dev_get_stats+0x19/0x100

[29413.501035] PGD 2e0d067 PUD 2e0f067 PMD 7f5d253067 PTE 0

[29413.503696] Oops: 0000 [#2] SMP

[29413.503696] Modules linked in: i40e(OE+) ipt_REJECT nf_reject_ipv4 mic(OE) xfrm_user xfrm_algo xt_addrtype xt_conntrack br_netfilter xt_multiport xt_CHECKSUM iptable_mangle xt_tcpudp ipt_MASQUERADE nf_nat_masquerade_ipv4 xt_comment iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack bridge stp llc iptable_filter ip_tables x_tables binfmt_misc nls_iso8859_1 intel_rapl x86_pkg_temp_thermal intel_powerclamp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd input_leds joydev sb_edac edac_core mei_me lpc_ich mei wmi ioatdma shpchp ipmi_ssif ipmi_si ipmi_msghandler 8250_fintek acpi_power_meter acpi_pad mac_hid nvidia_uvm(POE) ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp

[29413.512128] nfsd libiscsi_tcp libiscsi scsi_transport_iscsi auth_rpcgss nfs_acl tmp401 lockd coretemp parport_pc grace ppdev sunrpc lp parport autofs4 btrfs xor raid6_pq nvidia_drm(POE) nvidia_modeset(POE) ast ttm drm_kms_helper nvidia(POE) syscopyarea sysfillrect sysimgblt fb_sys_fops vxlan ip6_udp_tunnel drm udp_tunnel igb dca ahci ptp libahci pps_core i2c_algo_bit fjes hid_generic usbhid hid [last unloaded: i40e]

[29413.517753] CPU: 35 PID: 15380 Comm: biosdevname Tainted: P DOE 4.4.0-57-generic #78-Ubuntu

[29413.519128] Hardware name: Supermicro SYS-2028GR-TR/X10DRG-H, BIOS 1.0c 05/20/2015

[29413.520480] task: ffff883f640a0000 ti: ffff8803efc40000 task.ti: ffff8803efc40000

[29413.521828] RIP: 0010:[] [] dev_get_stats+0x19/0x100

[29413.523197] RSP: 0018:ffff8803efc43cc0 EFLAGS: 00010246

[29413.524526] RAX: 0000000000000000 RBX: ffff8803efc43d48 RCX: 0000000000000001

[29413.525865] RDX: ffffffffc1485540 RSI: ffff8803efc43d48 RDI: ffff887f6308b000

[29413.527182] RBP: ffff8803efc43cd0 R08: 0000000000000056 R09: 00000000000001be

[29413.528499] R10: ffff883f5674a000 R11: ffff883f5674a1bd R12: ffff887f6308b000

[29413.529808] R13: ffff887f5fe55600 R14: ffff883f46d08180 R15: ffff887f6308b000

[29413.531043] FS: 00007f8806a0b740(0000) GS:ffff883f7fdc0000(0000) knlGS:000000000000000...

JErni
Beginner

Hi!

The card worked for some hours in PCI Express 2.0 mode under ~20% load (talking to a 10G target over a Summit 670G2 switch) doing a two-way copy on an iSCSI disk, and then produced a flood of the messages below.

The XL710 is not usable for iSCSI from my perspective. What am I doing wrong? Or is there some emergent interaction between i40e and the iSCSI initiator modules (scsi_transport_iscsi, iscsi_tcp, libiscsi, libiscsi_tcp)? The kernel is currently Ubuntu 16.04's 4.4.0-57-generic.
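For reference, the initiator stack and driver version in play can be listed with:

# loaded iSCSI/i40e modules and the i40e driver version
lsmod | grep -E 'i40e|iscsi'
modinfo i40e | grep ^version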

I removed the card from the server and installed an 82599ES 10-Gigabit SFI/SFP+ card instead. Performance is now lower, but the system is stable, i.e. iSCSI is working.

Kind regards,

--

Juhan

Jan 2 19:12:10 kernel: [27240.521206] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received

Jan 2 19:12:10 kernel: [27240.521210] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received

Jan 2 19:12:10 kernel: [27240.521214] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received

Jan 2 19:12:10 kernel: [27240.521218] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received

Jan 2 19:12:10 kernel: [27240.521222] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received

Jan 2 19:12:10 kernel: [27240.521226] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received

Jan 2 19:12:10 kernel: [27240.521230] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received

...

idata
Employee

Hi JPE,

Can you try disabling LRO (Large Receive Offload)? Refer to the command below, where ethX is the XL710 interface:

ethtool -K ethX lro off
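To verify the setting, and to reapply it at boot if the interface is managed by ifupdown, something along these lines should work (interface name taken from your logs; adjust as needed):

# check the current state of the offload
ethtool -k p5p1 | grep large-receive-offload

# reapply at boot: add a post-up hook to the interface stanza
# in /etc/network/interfaces, e.g.
#   post-up /sbin/ethtool -K p5p1 lro off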

 

 

Please feel free to update me.

Thanks,

wb
idata
Employee

Hi JPE,

Please feel free to update me with the test result.

Thanks,

wb
JErni
Beginner

Hi!

Thanks for the suggestion! It seems that turning LRO off is not a complete solution to the problem, though. I've now updated the NVM to 5.05. The settings and statistics as reported by ethtool are below (LRO is now fixed to off in 5.05), accompanied by lspci and dmesg outputs.

The machine still drops the PCI Express speed to 5.0GT/s and issues a

[ 8075.145936] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

message. The layout of the machine is also included below (an NVIDIA K40 is connected to the same CPU, but it was not under load when the driver issue was detected).

When running an iperf TCP benchmark, the machine logs the following messages:

[157378.969496] NOHZ: local_softirq_pending 08

I have another XL710-QDA2 card of exactly the same make and revision, with the 5.05 NVM update, attached to another server with dual Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz CPUs via a 40 Gb switch that has jumbo frames enabled. That computer manages to maintain its PCI Express speed, but logs the occasional

[157555.720755] NOHZ: local_softirq_pending 08

messages during iperf tests.

What might cause the "RX driver issue detected, PF reset issued" message? Why does the card drop its PCI Express speed? I will go ahead and run another iSCSI test next week, but it would be great to have a set of tricks to try in case problems occur. I can experiment a bit now, as the systems are in less use than in January.
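A few generic ethtool knobs I plan to start with (standard tuning, not confirmed fixes for this particular issue):

# enlarge the RX/TX descriptor rings (i40e supports up to 4096)
ethtool -G p5p1 rx 4096 tx 4096

# use fixed interrupt coalescing instead of adaptive moderation
ethtool -C p5p1 adaptive-rx off adaptive-tx off rx-usecs 62 tx-usecs 62

# match the queue count to the CPUs actually driving the iSCSI traffic
ethtool -L p5p1 combined 16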

Kind regards,

jpe

Machine 1 (dual E5-2690v3):

# ethtool -k p5p1

Features for p5p1:

rx-checksumming: on

tx-checksumming: on

tx-checksum-ipv4: on

tx-checksum-ip-generic: off [fixed]

tx-checksum-ipv6: on

tx-checksum-fcoe-crc: off [fixed]

tx-checksum-sctp: on

scatter-gather: on

tx-scatter-gather: on

tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: on

tx-tcp-segmentation: on

tx-tcp-ecn-segmentation: on

tx-tcp6-segmentation: on

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: on

generic-receive-offload: on

large-receive-offload: off [fixed]

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: on

receive-hashing: on

highdma: on

rx-vlan-filter: on

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: off [fixed]

tx-gre-segmentation: on

tx-ipip-segmentation: off [fixed]

tx-sit-segmentation: off [fixed]

tx-udp_tnl-segmentation: on

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off [fixed]

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

l2-fwd-offload: off [fixed]

busy-poll: off [fixed]

hw-tc-offload: off [fixed]

# ethtool -S p5p1

NIC statistics:

rx_packets: 135906582

tx_packets: 236775208

rx_bytes: 1086040889842

tx_bytes: 2035124104972

rx_errors: 0

tx_errors: 0

rx_dropped: 0

tx_dropped: 0

collisions: 0

rx_length_errors: 0

rx_crc_errors: 0

rx_unicast: 135906078

tx_unicast: 236775090

rx_multicast: 118

tx_multicast: 118

rx_broadcast: 386

tx_broadcast: 0

rx_unknown_protocol: 0

tx_linearize: 0

tx_force_wb: 0

tx_lost_interrupt: 1

rx_alloc_fail: 0

rx_pg_alloc_fail: 0

fcoe_bad_fccrc: 0

rx_fcoe_dropped: 0

rx_fcoe_packets: 0

rx_fcoe_dwords: 0

fcoe_ddp_count: 0

fcoe_last_error: 0

tx_fcoe_packets: 0

tx_fcoe_dwords: 0

tx-0.tx_packets: 154713

tx-0.tx_bytes: 10657790

rx-0.rx_packets: 2492215

rx-0.rx_bytes: 22435014050

tx-1.tx_packets: 817847

tx-1.tx_bytes: 56428762

rx-1.rx_packets: 13056966

rx-1.rx_bytes: 117518390624

tx-2.tx_packets: 315825

tx-2.tx_bytes: 21896326

rx-2.rx_packets: 4859855

rx-2.rx_bytes: 43745917117

tx-3.tx_packets: 891321

tx-3.tx_bytes: 61440911

rx-3.rx_packets: 14258155

rx-3.rx_bytes: 128314969814

tx-4.tx_packets: 537998

tx-4.tx_bytes: 37296225

rx-4.rx_packets: 8434950

rx-4.rx_bytes: 75941005620

tx-5.tx_packets: 1114127

tx-5.tx_bytes: 77321742

rx-5.rx_packets: 17302666

rx-5.rx_bytes: 155777322356

tx-6.tx_packets: 303480

tx-6.tx_bytes: 21046985

rx-6.rx_packets: 4733870

rx-6.rx_bytes: 42627872440

tx-7.tx_packets: 231648

tx-7.tx_bytes: 15894117

rx-7.rx_packets: 3787521

rx-7.rx_bytes: 34083063902

tx-8.tx_packets: 25323

tx-8.tx_bytes: 1748876

rx-8.rx_packets: 402983

rx-8.rx_bytes: 3627610198

tx-9.tx_packets: 552077

tx-9.tx_bytes: 38129770

rx-9.rx_packets: 8782639

rx-9.rx_bytes: 79072508808

tx-10.tx_packets: 502443

tx-10.tx_bytes: 34558724

rx-10.rx_packets: 8123090

rx-10.rx_bytes: 73076190236

tx-11.tx_packets: 774191

tx-11.tx_bytes: 53563104

rx-11.rx_packets: 12196082

rx-11.rx_bytes: 109795466384

tx-12.tx_packets: 6254

tx-12.tx_bytes: 438620

rx-12.rx_packets: 98070

rx-12.rx_bytes: 883451748

tx-13.tx_packets: 9

tx-13.tx_bytes: 378

rx-13.rx_packets: 412

rx-13.rx_bytes: 24720

tx-14.tx_packets: 0

tx-14.tx_bytes: 0

rx-14.rx_packets: 0

rx-14.rx_bytes: 0

tx-15.tx_packets: 195116688

tx-15.tx_bytes: 1758770157440

rx-15.rx_packets: 12424945

rx-15.rx_bytes: 820062822

tx-16.tx_packets: 29386293

tx-16.tx_bytes: 250603243370

rx-16.rx_packets: 1726683

rx-16.rx_bytes: 113961282

tx-17.tx_packets: 0

tx-17.tx_bytes: 0

rx-17.rx_packets: 0

rx-17.rx_bytes: 0

tx-18.tx_packets: 0

tx-18.tx_bytes: 0

rx-18.rx_packets: 0

rx-18.rx_bytes: 0

tx-19.tx_packets: 200620

tx-19.tx_bytes: 14138912

rx-19.rx_packets: 2849304

rx-19.rx_bytes...

idata
Employee

Hi JPE,

Thank you very much for the update. Just to clarify: you mentioned below that you have another XL710-QDA2 of the same make:

"I have another XL710-QDA2 card of exactly the same make and revision, with the 5.05 NVM update, attached to another server with dual Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz CPUs via a 40 Gb switch that has jumbo frames enabled. That computer manages to maintain its PCI Express speed, but logs the occasional

[157555.720755] NOHZ: local_softirq_pending 08"

Have you compared the configuration of these two NICs? Is it possible to configure the one showing the "RX driver issue detected, PF reset issued" message with the same settings as the working one?
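For example, something like this would show the differences side by side (bash; "otherhost" is a placeholder for the stable server):

# diff the offload configuration of the two hosts
diff <(ethtool -k p5p1) <(ssh otherhost ethtool -k p5p1)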

 

 

Thanks,

sharon
idata
Employee

Hi JPE,

Please feel free to provide the information.

regards,

sharon