<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: i40e XL710 QDA2 as iSCSI initiator results in &amp;quot;RX driver issue detected, PF reset issued&amp;quot;, and iscsi ping timeouts in Ethernet Products</title>
    <link>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495648#M9506</link>
    <description>&lt;P&gt;Hi JPE,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt; Please feel free to update me with the test result.&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;Thanks,&lt;P&gt;&amp;nbsp;&lt;/P&gt;wb&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Thu, 19 Jan 2017 05:48:44 GMT</pubDate>
    <dc:creator>idata</dc:creator>
    <dc:date>2017-01-19T05:48:44Z</dc:date>
    <item>
      <title>i40e XL710 QDA2 as iSCSI initiator results in "RX driver issue detected, PF reset issued", and iscsi ping timeouts</title>
      <link>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495643#M9501</link>
      <description>&lt;P&gt;Hi!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;There is a dual E5-2690v3 box based on Supermicro SYS-2028GR-TR/X10DRG-H, BIOS 1.0c, running Ubuntu 16.04.1 with all current updates.&lt;/P&gt;&lt;P&gt;It has an XL710-QDA2 card, fw 5.0.40043 api 1.5 nvm 5.04 0x80002537, driver 1.5.25 (the stock Ubuntu i40e driver 1.4.25 resulted in a crash), that is planned to be used as an iSCSI initiator endpoint. But there seems to be a problem: the log file fills up with "RX driver issue detected" messages and occasionally the iSCSI link resets as ping times out. This is a critical error, as the mounted device becomes unusable!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;So, Question 1: Is there something that can be done to fix the iSCSI behaviour of the XL710 card? When testing the card with iperf (2 concurrent sessions, the other end had a 10G NIC), there were no problems. The problems started when the iSCSI connection was established.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Question 2: Is there a way to force the card to work in PCI Express 2.0 mode? The server downgraded the card once after several previous failures and then it became surprisingly stable. I cannot find a way to make it persist though.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Some excerpts from log files (there are also occasional TX driver issues, but much less frequently than RX problems):&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[  263.116057] EXT4-fs (sdk): mounted filesystem with ordered data mode. 
Opts: (null)&lt;/P&gt;&lt;P&gt;[  321.030246] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;[  332.512601] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;..lots of the above messages...&lt;/P&gt;&lt;P&gt;[  481.001787] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;[  487.183237] NOHZ: local_softirq_pending 08&lt;/P&gt;&lt;P&gt;[  491.151322] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;..lots of the above messages...&lt;/P&gt;&lt;P&gt;[ 1181.099046] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;[ 1199.852665]  connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295189627, last ping 4295190878, now 4295192132&lt;/P&gt;&lt;P&gt;[ 1199.852694]  connection1:0: detected conn error (1022)&lt;/P&gt;&lt;P&gt;[ 1320.412312]  session1: session recovery timed out after 120 secs&lt;/P&gt;&lt;P&gt;[ 1320.412325] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412331] sd 10:0:0:0: [sdk] killing request&lt;/P&gt;&lt;P&gt;[ 1320.412347] sd 10:0:0:0: [sdk] FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK&lt;/P&gt;&lt;P&gt;[ 1320.412352] sd 10:0:0:0: [sdk] CDB: Write Same(10) 41 00 6b 40 69 00 00 08 00 00&lt;/P&gt;&lt;P&gt;[ 1320.412356] blk_update_request: I/O error, dev sdk, sector 1799383296&lt;/P&gt;&lt;P&gt;[ 1320.412411] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412423] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412428] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412433] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412438] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412442] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412446] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412451] sd 
10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412455] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412460] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412464] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412469] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412473] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412477] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412482] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412486] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412555] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412566] Aborting journal on device sdk-8.&lt;/P&gt;&lt;P&gt;[ 1320.412571] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1320.412576] JBD2: Error -5 detected when updating journal superblock for sdk-8.&lt;/P&gt;&lt;P&gt;[ 1332.831851] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[ 1332.831864] EXT4-fs error (device sdk): ext4_journal_check_start:56: Detected aborted journal&lt;/P&gt;&lt;P&gt;[ 1332.831869] EXT4-fs (sdk): Remounting filesystem read-only&lt;/P&gt;&lt;P&gt;[ 1332.831873] EXT4-fs (sdk): previous I/O error to superblock detected&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Unloading the kernel module and modprobe-ing it again:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[ 1380.970732] i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 1.5.25&lt;/P&gt;&lt;P&gt;[ 1380.970737] i40e: Copyright(c) 2013 - 2016 Intel Corporation.&lt;/P&gt;&lt;P&gt;[ 1380.987563] i40e 0000:81:00.0: fw 5.0.40043 api 1.5 nvm 5.04 0x80002537 0.0.0&lt;/P&gt;&lt;P&gt;[ 1381.127289] i40e 0000:81:00.0: MAC address: 3c:xx:xx:xx:xx:xx&lt;/P&gt;&lt;P&gt;[ 1381.246815] i40e 0000:81:00.0 p5p1: renamed from eth0&lt;/P&gt;&lt;P&gt;[ 1381.358723] i40e 0000:81:00.0 p5p1: NIC Link is Up 40 Gbps 
Full Duplex, Flow Control: None&lt;/P&gt;&lt;P&gt;[ 1381.416135] i40e 0000:81:00.0: PCI-Express: Speed 8.0GT/s Width x8&lt;/P&gt;&lt;P&gt;[ 1381.454729] i40e 0000:81:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF DCB VxLAN Geneve NVGRE PTP VEPA&lt;/P&gt;&lt;P&gt;[ 1381.471584] i40e 0000:81:00.1: fw 5.0.40043 api 1.5 nvm 5.04 0x80002537 0.0.0&lt;/P&gt;&lt;P&gt;[ 1381.605866] i40e 0000:81:00.1: MAC address: 3c:xx:xx:xx:xx:xy&lt;/P&gt;&lt;P&gt;[ 1381.712287] i40e 0000:81:00.1 p5p2: renamed from eth0&lt;/P&gt;&lt;P&gt;[ 1381.751417] IPv6: ADDRCONF(NETDEV_UP): p5p2: link is not ready&lt;/P&gt;&lt;P&gt;[ 1381.810607] IPv6: ADDRCONF(NETDEV_UP): p5p2: link is not ready&lt;/P&gt;&lt;P&gt;[ 1381.820095] i40e 0000:81:00.1: PCI-Express: Speed 8.0GT/s Width x8&lt;/P&gt;&lt;P&gt;[ 1381.826141] i40e 0000:81:00.1: Features: PF-id[1] VFs: 64 VSIs: 66 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF DCB VxLAN Geneve NVGRE PTP VEPA&lt;/P&gt;&lt;P&gt;[ 1647.123056] EXT4-fs (sdk): recovery complete&lt;/P&gt;&lt;P&gt;[ 1647.123414] EXT4-fs (sdk): mounted filesystem with ordered data mode. Opts: (null)&lt;/P&gt;&lt;P&gt;[ 1668.179234] NOHZ: local_softirq_pending 08&lt;/P&gt;&lt;P&gt;[ 1673.994586] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;[ 1676.871805] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;[ 1692.833097] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;[ 1735.179086] NOHZ: local_softirq_pending 08&lt;/P&gt;&lt;P&gt;[ 1767.357902] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;[ 1803.828762] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;After several failures, the card loaded in PCI-Express 2.0 mode. 
It became stable then:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  systemd[1]: Started ifup for p5p1.&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  systemd[1]: Found device Ethernet Controller XL710 for 40GbE QSFP+ (Ethernet Converged Network Adapter XL710-Q2).&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  NetworkManager[1911]:   [1483289075.5028] devices added (path: /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/net/p5p1, iface: p5p1)&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  NetworkManager[1911]:   [1483289075.5029] locking wired connection setting&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  NetworkManager[1911]:   [1483289075.5029] get unmanaged devices count: 3&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  avahi-daemon[1741]: Joining mDNS multicast group on interface p5p1.IPv4 with address xx.xx.xx.xx.&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  avahi-daemon[1741]: New relevant interface p5p1.IPv4 for mDNS.&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  NetworkManager[1911]:   [1483289075.5577] device (p5p1): link connected&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  avahi-daemon[1741]: Registering new address record for xx.xx.xx.xx on p5p1.IPv4.&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  kernel: [11572.541797] i40e 0000:81:00.0 p5p1: NIC Link is Up 40 Gbps Full Duplex, Flow Control: None&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  kernel: [11572.579303] i40e 0000:81:00.0: PCI-Express: Speed 5.0GT/s Width x8&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  kernel: [11572.579309] i40e 0000:81:00.0: PCI-Express bandwidth available for this device may be insufficient for optimal performance.&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  kernel: [11572.579312] i40e 0000:81:00.0: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.&lt;/P&gt;&lt;P&gt;Jan  1 18:44:35  kernel: [11...&lt;/P&gt;</description>
      <pubDate>Mon, 02 Jan 2017 03:34:37 GMT</pubDate>
      <guid>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495643#M9501</guid>
      <dc:creator>JErni</dc:creator>
      <dc:date>2017-01-02T03:34:37Z</dc:date>
    </item>
    <item>
      <title>Re: i40e XL710 QDA2 as iSCSI initiator results in "RX driver issue detected, PF reset issued", and iscsi ping timeouts</title>
      <link>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495644#M9502</link>
      <description>&lt;P&gt;Hi JPE,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you for the post. The XL710-QDA2 should be backward compatible when installed in PCI Express 2.0 mode. I will look further into the question about the iSCSI behavior.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;rgds,&lt;/P&gt;&lt;P&gt;wb&lt;/P&gt;</description>
      <pubDate>Mon, 02 Jan 2017 06:04:49 GMT</pubDate>
      <guid>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495644#M9502</guid>
      <dc:creator>st4</dc:creator>
      <dc:date>2017-01-02T06:04:49Z</dc:date>
    </item>
    <item>
      <title>Re: i40e XL710 QDA2 as iSCSI initiator results in "RX driver issue detected, PF reset issued", and iscsi ping timeouts</title>
      <link>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495645#M9503</link>
      <description>&lt;P&gt;Hi!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Is it possible to force the card to PCI Express 2.0 mode by i40e module parameters or some other way from the OS, or should it be done from BIOS?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Some more recent messages from dmesg (including an OOPS, that occurs after iscsid has been running for a few hours):&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[17177.957714]  connection1:0: detected conn error (1022)&lt;/P&gt;&lt;P&gt;[17193.493630]  connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4299188053, last ping 4299189304, now 4299190556&lt;/P&gt;&lt;P&gt;[17193.493654]  connection1:0: detected conn error (1022)&lt;/P&gt;&lt;P&gt;[17196.297655] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;[17209.959889] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;[17414.263227] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;[17420.231216] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;[17456.831086] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;[17475.067026] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;[17477.382925] i40e 0000:81:00.0: set phy mask fail, err I40E_ERR_ADMIN_QUEUE_TIMEOUT aq_err OK&lt;/P&gt;&lt;P&gt;[17478.411095] i40e 0000:81:00.0: couldn't get PF vsi config, err I40E_ERR_ADMIN_QUEUE_TIMEOUT aq_err OK&lt;/P&gt;&lt;P&gt;[17478.411107] i40e 0000:81:00.0: rebuild of veb_idx 0 owner VSI failed: -2&lt;/P&gt;&lt;P&gt;[17478.411114] i40e 0000:81:00.0: rebuild of switch failed: -2, will try to set up simple PF connection&lt;/P&gt;&lt;P&gt;[17478.923803] i40e 0000:81:00.0: couldn't get PF vsi config, err I40E_ERR_ADMIN_QUEUE_TIMEOUT aq_err OK&lt;/P&gt;&lt;P&gt;[17478.923813] i40e 0000:81:00.0: rebuild of Main VSI failed: -2&lt;/P&gt;&lt;P&gt;[17484.756674]  connection1:0: ping timeout of 5 secs expired, 
recv timeout 5, last rx 4299260867, last ping 4299262118, now 4299263372&lt;/P&gt;&lt;P&gt;[17484.756704]  connection1:0: detected conn error (1022)&lt;/P&gt;&lt;P&gt;[17605.028334]  session1: session recovery timed out after 120 secs&lt;/P&gt;&lt;P&gt;[17605.028349] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;&lt;P&gt;[17605.028355] sd 10:0:0:0: [sdk] killing request&lt;/P&gt;&lt;P&gt;[17605.028371] sd 10:0:0:0: [sdk] FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK&lt;/P&gt;&lt;P&gt;[17605.028377] sd 10:0:0:0: [sdk] CDB: Write same(16) 93 00 00 00 00 02 f0 40 61 00 00 00 08 00 00 00&lt;/P&gt;&lt;P&gt;[17605.028381] blk_update_request: I/O error, dev sdk, sector 12620685568&lt;/P&gt;&lt;P&gt;[17605.028437] sd 10:0:0:0: rejecting I/O to offline device&lt;/P&gt;[29413.061975] CPU: 31 PID: 15250 Comm: biosdevname Tainted: P       OE   4.4.0-57-generic # 78-Ubuntu&lt;P&gt;[29413.063373] Hardware name: Supermicro SYS-2028GR-TR/X10DRG-H, BIOS 1.0c 05/20/2015&lt;/P&gt;&lt;P&gt;[29413.064784] task: ffff883f5356c600 ti: ffff881835d64000 task.ti: ffff881835d64000&lt;/P&gt;&lt;P&gt;[29413.066218] RIP: 0010:[]  [] dev_get_stats+0x19/0x100&lt;/P&gt;&lt;P&gt;[29413.067682] RSP: 0018:ffff881835d67cc0  EFLAGS: 00010246&lt;/P&gt;&lt;P&gt;[29413.069142] RAX: 0000000000000000 RBX: ffff881835d67d48 RCX: 0000000000000001&lt;/P&gt;&lt;P&gt;[29413.070622] RDX: ffffffffc1485540 RSI: ffff881835d67d48 RDI: ffff887f6308b000&lt;/P&gt;&lt;P&gt;[29413.072097] RBP: ffff881835d67cd0 R08: 0000000000000056 R09: 00000000000001be&lt;/P&gt;&lt;P&gt;[29413.073583] R10: ffff883f64964000 R11: ffff883f649641bd R12: ffff887f6308b000&lt;/P&gt;&lt;P&gt;[29413.075083] R13: ffff887f63ce7b00 R14: ffff883f63acdb00 R15: ffff887f6308b000&lt;/P&gt;&lt;P&gt;[29413.076580] FS:  00007f552a2ad740(0000) GS:ffff883f7fcc0000(0000) knlGS:0000000000000000&lt;/P&gt;&lt;P&gt;[29413.078097] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033&lt;/P&gt;&lt;P&gt;[29413.079614] CR2: ffffffffc14855b8 CR3: 
0000000aa3288000 CR4: 00000000001406e0&lt;/P&gt;&lt;P&gt;[29413.081144] Stack:&lt;/P&gt;&lt;P&gt;[29413.082658]  ffff883f63acdb00 ffff887f6308b000 ffff881835d67e18 ffffffff817492d7&lt;/P&gt;&lt;P&gt;[29413.084219]  0000000000000000 0000000000000000 0000000000000000 0000000000000000&lt;/P&gt;&lt;P&gt;[29413.085788]  0000000000000000 0000000000183b17 0000000000002446 0000000000000000&lt;/P&gt;&lt;P&gt;[29413.087350] Call Trace:&lt;/P&gt;&lt;P&gt;[29413.088900]  [] dev_seq_printf_stats+0x37/0x120&lt;/P&gt;&lt;P&gt;[29413.090464]  [] dev_seq_show+0x14/0x30&lt;/P&gt;&lt;P&gt;[29413.092019]  [] seq_read+0x2d6/0x390&lt;/P&gt;&lt;P&gt;[29413.093575]  [] proc_reg_read+0x42/0x70&lt;/P&gt;&lt;P&gt;[29413.095126]  [] __vfs_read+0x18/0x40&lt;/P&gt;&lt;P&gt;[29413.096687]  [] vfs_read+0x86/0x130&lt;/P&gt;&lt;P&gt;[29413.098253]  [] SyS_read+0x55/0xc0&lt;/P&gt;&lt;P&gt;[29413.099810]  [] entry_SYSCALL_64_fastpath+0x16/0x71&lt;/P&gt;&lt;P&gt;[29413.101375] Code: ce 81 c1 b8 00 00 00 c1 e9 03 f3 48 a5 5d c3 0f 1f 00 0f 1f 44 00 00 55 48 89 e5 41 54 53 48 8b 97 00 02 00 00 49 89 fc 48 89 f3 &amp;lt;48&amp;gt; 83 7a 78 00 74 54 48 8d 7e 08 48 89 f1 31 c0 48 c7 06 00 00&lt;/P&gt;&lt;P&gt;[29413.104773] RIP  [] dev_get_stats+0x19/0x100&lt;/P&gt;&lt;P&gt;[29413.106419]  RSP &lt;/P&gt;&lt;P&gt;[29413.108029] CR2: ffffffffc14855b8&lt;/P&gt;&lt;P&gt;[29413.115107] ---[ end trace 984eff0723d78e6c ]---&lt;/P&gt;&lt;P&gt;[29413.221129] i40e 0000:81:00.0: PCI-Express: Speed 8.0GT/s Width x8&lt;/P&gt;&lt;P&gt;[29413.259233] i40e 0000:81:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF DCB VxLAN Geneve NVGRE PTP VEPA&lt;/P&gt;&lt;P&gt;[29413.323260] i40e 0000:81:00.1: fw 5.0.40043 api 1.5 nvm 5.04 0x80002537 0.0.0&lt;/P&gt;&lt;P&gt;[29413.397529] i40e 0000:81:00.1: MAC address: 3c:xx:xx:xx:xx:xx&lt;/P&gt;&lt;P&gt;[29413.498143] BUG: unable to handle kernel paging request at ffffffffc14855b8&lt;/P&gt;&lt;P&gt;[29413.499594] IP: [] 
dev_get_stats+0x19/0x100&lt;/P&gt;&lt;P&gt;[29413.501035] PGD 2e0d067 PUD 2e0f067 PMD 7f5d253067 PTE 0&lt;/P&gt;&lt;P&gt;[29413.502396] Oops: 0000 [# 2] SMP&lt;/P&gt;&lt;P&gt;[29413.503696] Modules linked in: i40e(OE+) ipt_REJECT nf_reject_ipv4 mic(OE) xfrm_user xfrm_algo xt_addrtype xt_conntrack br_netfilter xt_multiport xt_CHECKSUM iptable_mangle xt_tcpudp ipt_MASQUERADE nf_nat_masquerade_ipv4 xt_comment iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack bridge stp llc iptable_filter ip_tables x_tables binfmt_misc nls_iso8859_1 intel_rapl x86_pkg_temp_thermal intel_powerclamp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd input_leds joydev sb_edac edac_core mei_me lpc_ich mei wmi ioatdma shpchp ipmi_ssif ipmi_si ipmi_msghandler 8250_fintek acpi_power_meter acpi_pad mac_hid nvidia_uvm(POE) ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp&lt;/P&gt;&lt;P&gt;[29413.512128]  nfsd libiscsi_tcp libiscsi scsi_transport_iscsi auth_rpcgss nfs_acl tmp401 lockd coretemp parport_pc grace ppdev sunrpc lp parport autofs4 btrfs xor raid6_pq nvidia_drm(POE) nvidia_modeset(POE) ast ttm drm_kms_helper nvidia(POE) syscopyarea sysfillrect sysimgblt fb_sys_fops vxlan ip6_udp_tunnel drm udp_tunnel igb dca ahci ptp libahci pps_core i2c_algo_bit fjes hid_generic usbhid hid [last unloaded: i40e]&lt;/P&gt;[29413.517753] CPU: 35 PID: 15380 Comm: biosdevname Tainted: P  DOE   4.4.0-57-generic # 78-Ubuntu&lt;P&gt;[29413.519128] Hardware name: Supermicro SYS-2028GR-TR/X10DRG-H, BIOS 1.0c 05/20/2015&lt;/P&gt;&lt;P&gt;[29413.520480] task: ffff883f640a0000 ti: ffff8803efc40000 task.ti: ffff8803efc40000&lt;/P&gt;&lt;P&gt;[29413.521828] RIP: 0010:[]  [] dev_get_stats+0x19/0x100&lt;/P&gt;&lt;P&gt;[29413.523197] RSP: 0018:ffff8803efc43cc0  EFLAGS: 00010246&lt;/P&gt;&lt;P&gt;[29413.524526] RAX: 0000000000000000 RBX: ffff8803efc43d48 RCX: 
0000000000000001&lt;/P&gt;&lt;P&gt;[29413.525865] RDX: ffffffffc1485540 RSI: ffff8803efc43d48 RDI: ffff887f6308b000&lt;/P&gt;&lt;P&gt;[29413.527182] RBP: ffff8803efc43cd0 R08: 0000000000000056 R09: 00000000000001be&lt;/P&gt;&lt;P&gt;[29413.528499] R10: ffff883f5674a000 R11: ffff883f5674a1bd R12: ffff887f6308b000&lt;/P&gt;&lt;P&gt;[29413.529808] R13: ffff887f5fe55600 R14: ffff883f46d08180 R15: ffff887f6308b000&lt;/P&gt;&lt;P&gt;[29413.531043] FS:  00007f8806a0b740(0000) GS:ffff883f7fdc0000(0000) knlGS:000000000000000...&lt;/P&gt;</description>
      <pubDate>Mon, 02 Jan 2017 09:52:11 GMT</pubDate>
      <guid>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495645#M9503</guid>
      <dc:creator>JErni</dc:creator>
      <dc:date>2017-01-02T09:52:11Z</dc:date>
    </item>
    <item>
      <title>Re: i40e XL710 QDA2 as iSCSI initiator results in "RX driver issue detected, PF reset issued", and iscsi ping timeouts</title>
      <link>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495646#M9504</link>
      <description>&lt;P&gt;Hi!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The card worked for some hours in PCI Express 2.0 mode under ~20% load (talking to a 10G target over a Summit 670G2 switch) doing a two-way copy on an iSCSI disk, and then gave a flood of the messages below.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The XL710 is not usable for iSCSI from my perspective. What am I doing wrong? Or is there some emergent interaction between i40e and the iSCSI initiator modules (scsi_transport_iscsi, iscsi_tcp, libiscsi, libiscsi_tcp)? The kernel is currently Ubuntu 16.04's 4.4.0-57-generic.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I removed the card from the server and installed an 82599ES 10-Gigabit SFI/SFP+ card instead. Now performance is lower, but the system is stable, i.e. iSCSI is working.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Kind regards,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;--&lt;/P&gt;&lt;P&gt;Juhan&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Jan  2 19:12:10 kernel: [27240.521206] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received&lt;/P&gt;&lt;P&gt;Jan  2 19:12:10 kernel: [27240.521210] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received&lt;/P&gt;&lt;P&gt;Jan  2 19:12:10 kernel: [27240.521214] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received&lt;/P&gt;&lt;P&gt;Jan  2 19:12:10 kernel: [27240.521218] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received&lt;/P&gt;&lt;P&gt;Jan  2 19:12:10 kernel: [27240.521222] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received&lt;/P&gt;&lt;P&gt;Jan  2 19:12:10 kernel: [27240.521226] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received&lt;/P&gt;&lt;P&gt;Jan  2 19:12:10 kernel: [27240.521230] i40e 0000:81:00.0: ARQ Error: Unknown event 0x0000 received&lt;/P&gt;&lt;P&gt;...&lt;/P&gt;</description>
      <pubDate>Tue, 03 Jan 2017 08:57:44 GMT</pubDate>
      <guid>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495646#M9504</guid>
      <dc:creator>JErni</dc:creator>
      <dc:date>2017-01-03T08:57:44Z</dc:date>
    </item>
    <item>
      <title>Re: i40e XL710 QDA2 as iSCSI initiator results in "RX driver issue detected, PF reset issued", and iscsi ping timeouts</title>
      <link>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495647#M9505</link>
      <description>&lt;P&gt;Hi JPE,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt; Can you try to disable LRO (Large Receive Offload)? Refer to the command below:&lt;P&gt;&amp;nbsp;&lt;/P&gt; ethtool -K ethX lro off&lt;P&gt;&amp;nbsp;&lt;/P&gt; where ethX is the interface name of the XL710 adapter&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;  Please feel free to update me. &lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;Thanks,&lt;P&gt;&amp;nbsp;&lt;/P&gt;wb&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 11 Jan 2017 06:14:05 GMT</pubDate>
      <guid>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495647#M9505</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2017-01-11T06:14:05Z</dc:date>
    </item>
    <item>
      <title>Re: i40e XL710 QDA2 as iSCSI initiator results in "RX driver issue detected, PF reset issued", and iscsi ping timeouts</title>
      <link>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495648#M9506</link>
      <description>&lt;P&gt;Hi JPE,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt; Please feel free to update me with the test result.&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;Thanks,&lt;P&gt;&amp;nbsp;&lt;/P&gt;wb&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 19 Jan 2017 05:48:44 GMT</pubDate>
      <guid>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495648#M9506</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2017-01-19T05:48:44Z</dc:date>
    </item>
    <item>
      <title>Re: i40e XL710 QDA2 as iSCSI initiator results in "RX driver issue detected, PF reset issued", and iscsi ping timeouts</title>
      <link>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495649#M9507</link>
      <description>&lt;P&gt;Hi!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks for the suggestion! It seems that turning LRO off is not yet a full solution to the problem. I've now updated the NVM to 5.05. The settings and statistics are as reported by ethtool below (LRO is now fixed to OFF in 5.05), accompanied by lspci and dmesg outputs.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The machine still drops the PCI Express speed to 5.0GT/s and issues a &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[ 8075.145936] i40e 0000:81:00.0: RX driver issue detected, PF reset issued&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;message. The layout of the machine is also included below (the NVidia K40 is connected to the same CPU, but was not under load when the driver issue was detected).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;When running an iperf TCP benchmark, the machine emits the following messages:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[157378.969496] NOHZ: local_softirq_pending 08&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have an XL710-QDA2 card of exactly the same make and revision, with NVM update 5.05, attached to another server with dual Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz CPUs via a 40 Gb switch which has jumbo frames enabled. The other computer manages to maintain the PCI Express speed, but emits the occasional&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[157555.720755] NOHZ: local_softirq_pending 08&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;messages during iperf tests.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;What might cause the "RX driver issue detected, PF reset issued" message? Why does the card drop its PCI Express speed? I will go ahead and run another iSCSI test next week, but it would be great to have a set of tricks to try in case some problems occur. 
I will now be able to try a few things as currently the systems are in less use than in January.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Kind regards,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;jpe&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Machine 1 (dual E5-2690v3):&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;#  ethtool -k p5p1&lt;/P&gt;&lt;P&gt;Features for p5p1:&lt;/P&gt;&lt;P&gt;rx-checksumming: on&lt;/P&gt;&lt;P&gt;tx-checksumming: on&lt;/P&gt;&lt;P&gt;        tx-checksum-ipv4: on&lt;/P&gt;&lt;P&gt;        tx-checksum-ip-generic: off [fixed]&lt;/P&gt;&lt;P&gt;        tx-checksum-ipv6: on&lt;/P&gt;&lt;P&gt;        tx-checksum-fcoe-crc: off [fixed]&lt;/P&gt;&lt;P&gt;        tx-checksum-sctp: on&lt;/P&gt;&lt;P&gt;scatter-gather: on&lt;/P&gt;&lt;P&gt;        tx-scatter-gather: on&lt;/P&gt;&lt;P&gt;        tx-scatter-gather-fraglist: off [fixed]&lt;/P&gt;&lt;P&gt;tcp-segmentation-offload: on&lt;/P&gt;&lt;P&gt;        tx-tcp-segmentation: on&lt;/P&gt;&lt;P&gt;        tx-tcp-ecn-segmentation: on&lt;/P&gt;&lt;P&gt;        tx-tcp6-segmentation: on&lt;/P&gt;&lt;P&gt;udp-fragmentation-offload: off [fixed]&lt;/P&gt;&lt;P&gt;generic-segmentation-offload: on&lt;/P&gt;&lt;P&gt;generic-receive-offload: on&lt;/P&gt;&lt;P&gt;large-receive-offload: off [fixed]&lt;/P&gt;&lt;P&gt;rx-vlan-offload: on&lt;/P&gt;&lt;P&gt;tx-vlan-offload: on&lt;/P&gt;&lt;P&gt;ntuple-filters: on&lt;/P&gt;&lt;P&gt;receive-hashing: on&lt;/P&gt;&lt;P&gt;highdma: on&lt;/P&gt;&lt;P&gt;rx-vlan-filter: on&lt;/P&gt;&lt;P&gt;vlan-challenged: off [fixed]&lt;/P&gt;&lt;P&gt;tx-lockless: off [fixed]&lt;/P&gt;&lt;P&gt;netns-local: off [fixed]&lt;/P&gt;&lt;P&gt;tx-gso-robust: off [fixed]&lt;/P&gt;&lt;P&gt;tx-fcoe-segmentation: off [fixed]&lt;/P&gt;&lt;P&gt;tx-gre-segmentation: on&lt;/P&gt;&lt;P&gt;tx-ipip-segmentation: off [fixed]&lt;/P&gt;&lt;P&gt;tx-sit-segmentation: off [fixed]&lt;/P&gt;&lt;P&gt;tx-udp_tnl-segmentation: on&lt;/P&gt;&lt;P&gt;fcoe-mtu: off [fixed]&lt;/P&gt;&lt;P&gt;tx-nocache-copy: 
off&lt;/P&gt;&lt;P&gt;loopback: off [fixed]&lt;/P&gt;&lt;P&gt;rx-fcs: off [fixed]&lt;/P&gt;&lt;P&gt;rx-all: off [fixed]&lt;/P&gt;&lt;P&gt;tx-vlan-stag-hw-insert: off [fixed]&lt;/P&gt;&lt;P&gt;rx-vlan-stag-hw-parse: off [fixed]&lt;/P&gt;&lt;P&gt;rx-vlan-stag-filter: off [fixed]&lt;/P&gt;&lt;P&gt;l2-fwd-offload: off [fixed]&lt;/P&gt;&lt;P&gt;busy-poll: off [fixed]&lt;/P&gt;&lt;P&gt;hw-tc-offload: off [fixed]&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;#  ethtool -S p5p1&lt;/P&gt;&lt;P&gt;NIC statistics:&lt;/P&gt;&lt;P&gt;     rx_packets: 135906582&lt;/P&gt;&lt;P&gt;     tx_packets: 236775208&lt;/P&gt;&lt;P&gt;     rx_bytes: 1086040889842&lt;/P&gt;&lt;P&gt;     tx_bytes: 2035124104972&lt;/P&gt;&lt;P&gt;     rx_errors: 0&lt;/P&gt;&lt;P&gt;     tx_errors: 0&lt;/P&gt;&lt;P&gt;     rx_dropped: 0&lt;/P&gt;&lt;P&gt;     tx_dropped: 0&lt;/P&gt;&lt;P&gt;     collisions: 0&lt;/P&gt;&lt;P&gt;     rx_length_errors: 0&lt;/P&gt;&lt;P&gt;     rx_crc_errors: 0&lt;/P&gt;&lt;P&gt;     rx_unicast: 135906078&lt;/P&gt;&lt;P&gt;     tx_unicast: 236775090&lt;/P&gt;&lt;P&gt;     rx_multicast: 118&lt;/P&gt;&lt;P&gt;     tx_multicast: 118&lt;/P&gt;&lt;P&gt;     rx_broadcast: 386&lt;/P&gt;&lt;P&gt;     tx_broadcast: 0&lt;/P&gt;&lt;P&gt;     rx_unknown_protocol: 0&lt;/P&gt;&lt;P&gt;     tx_linearize: 0&lt;/P&gt;&lt;P&gt;     tx_force_wb: 0&lt;/P&gt;&lt;P&gt;     tx_lost_interrupt: 1&lt;/P&gt;&lt;P&gt;     rx_alloc_fail: 0&lt;/P&gt;&lt;P&gt;     rx_pg_alloc_fail: 0&lt;/P&gt;&lt;P&gt;     fcoe_bad_fccrc: 0&lt;/P&gt;&lt;P&gt;     rx_fcoe_dropped: 0&lt;/P&gt;&lt;P&gt;     rx_fcoe_packets: 0&lt;/P&gt;&lt;P&gt;     rx_fcoe_dwords: 0&lt;/P&gt;&lt;P&gt;     fcoe_ddp_count: 0&lt;/P&gt;&lt;P&gt;     fcoe_last_error: 0&lt;/P&gt;&lt;P&gt;     tx_fcoe_packets: 0&lt;/P&gt;&lt;P&gt;     tx_fcoe_dwords: 0&lt;/P&gt;&lt;P&gt;     tx-0.tx_packets: 154713&lt;/P&gt;&lt;P&gt;     tx-0.tx_bytes: 10657790&lt;/P&gt;&lt;P&gt;     rx-0.rx_packets: 2492215&lt;/P&gt;&lt;P&gt;     rx-0.rx_bytes: 22435014050&lt;/P&gt;&lt;P&gt;     
tx-1.tx_packets: 817847&lt;/P&gt;&lt;P&gt;     tx-1.tx_bytes: 56428762&lt;/P&gt;&lt;P&gt;     rx-1.rx_packets: 13056966&lt;/P&gt;&lt;P&gt;     rx-1.rx_bytes: 117518390624&lt;/P&gt;&lt;P&gt;     tx-2.tx_packets: 315825&lt;/P&gt;&lt;P&gt;     tx-2.tx_bytes: 21896326&lt;/P&gt;&lt;P&gt;     rx-2.rx_packets: 4859855&lt;/P&gt;&lt;P&gt;     rx-2.rx_bytes: 43745917117&lt;/P&gt;&lt;P&gt;     tx-3.tx_packets: 891321&lt;/P&gt;&lt;P&gt;     tx-3.tx_bytes: 61440911&lt;/P&gt;&lt;P&gt;     rx-3.rx_packets: 14258155&lt;/P&gt;&lt;P&gt;     rx-3.rx_bytes: 128314969814&lt;/P&gt;&lt;P&gt;     tx-4.tx_packets: 537998&lt;/P&gt;&lt;P&gt;     tx-4.tx_bytes: 37296225&lt;/P&gt;&lt;P&gt;     rx-4.rx_packets: 8434950&lt;/P&gt;&lt;P&gt;     rx-4.rx_bytes: 75941005620&lt;/P&gt;&lt;P&gt;     tx-5.tx_packets: 1114127&lt;/P&gt;&lt;P&gt;     tx-5.tx_bytes: 77321742&lt;/P&gt;&lt;P&gt;     rx-5.rx_packets: 17302666&lt;/P&gt;&lt;P&gt;     rx-5.rx_bytes: 155777322356&lt;/P&gt;&lt;P&gt;     tx-6.tx_packets: 303480&lt;/P&gt;&lt;P&gt;     tx-6.tx_bytes: 21046985&lt;/P&gt;&lt;P&gt;     rx-6.rx_packets: 4733870&lt;/P&gt;&lt;P&gt;     rx-6.rx_bytes: 42627872440&lt;/P&gt;&lt;P&gt;     tx-7.tx_packets: 231648&lt;/P&gt;&lt;P&gt;     tx-7.tx_bytes: 15894117&lt;/P&gt;&lt;P&gt;     rx-7.rx_packets: 3787521&lt;/P&gt;&lt;P&gt;     rx-7.rx_bytes: 34083063902&lt;/P&gt;&lt;P&gt;     tx-8.tx_packets: 25323&lt;/P&gt;&lt;P&gt;     tx-8.tx_bytes: 1748876&lt;/P&gt;&lt;P&gt;     rx-8.rx_packets: 402983&lt;/P&gt;&lt;P&gt;     rx-8.rx_bytes: 3627610198&lt;/P&gt;&lt;P&gt;     tx-9.tx_packets: 552077&lt;/P&gt;&lt;P&gt;     tx-9.tx_bytes: 38129770&lt;/P&gt;&lt;P&gt;     rx-9.rx_packets: 8782639&lt;/P&gt;&lt;P&gt;     rx-9.rx_bytes: 79072508808&lt;/P&gt;&lt;P&gt;     tx-10.tx_packets: 502443&lt;/P&gt;&lt;P&gt;     tx-10.tx_bytes: 34558724&lt;/P&gt;&lt;P&gt;     rx-10.rx_packets: 8123090&lt;/P&gt;&lt;P&gt;     rx-10.rx_bytes: 73076190236&lt;/P&gt;&lt;P&gt;     tx-11.tx_packets: 774191&lt;/P&gt;&lt;P&gt;     tx-11.tx_bytes: 
53563104&lt;/P&gt;&lt;P&gt;     rx-11.rx_packets: 12196082&lt;/P&gt;&lt;P&gt;     rx-11.rx_bytes: 109795466384&lt;/P&gt;&lt;P&gt;     tx-12.tx_packets: 6254&lt;/P&gt;&lt;P&gt;     tx-12.tx_bytes: 438620&lt;/P&gt;&lt;P&gt;     rx-12.rx_packets: 98070&lt;/P&gt;&lt;P&gt;     rx-12.rx_bytes: 883451748&lt;/P&gt;&lt;P&gt;     tx-13.tx_packets: 9&lt;/P&gt;&lt;P&gt;     tx-13.tx_bytes: 378&lt;/P&gt;&lt;P&gt;     rx-13.rx_packets: 412&lt;/P&gt;&lt;P&gt;     rx-13.rx_bytes: 24720&lt;/P&gt;&lt;P&gt;     tx-14.tx_packets: 0&lt;/P&gt;&lt;P&gt;     tx-14.tx_bytes: 0&lt;/P&gt;&lt;P&gt;     rx-14.rx_packets: 0&lt;/P&gt;&lt;P&gt;     rx-14.rx_bytes: 0&lt;/P&gt;&lt;P&gt;     tx-15.tx_packets: 195116688&lt;/P&gt;&lt;P&gt;     tx-15.tx_bytes: 1758770157440&lt;/P&gt;&lt;P&gt;     rx-15.rx_packets: 12424945&lt;/P&gt;&lt;P&gt;     rx-15.rx_bytes: 820062822&lt;/P&gt;&lt;P&gt;     tx-16.tx_packets: 29386293&lt;/P&gt;&lt;P&gt;     tx-16.tx_bytes: 250603243370&lt;/P&gt;&lt;P&gt;     rx-16.rx_packets: 1726683&lt;/P&gt;&lt;P&gt;     rx-16.rx_bytes: 113961282&lt;/P&gt;&lt;P&gt;     tx-17.tx_packets: 0&lt;/P&gt;&lt;P&gt;     tx-17.tx_bytes: 0&lt;/P&gt;&lt;P&gt;     rx-17.rx_packets: 0&lt;/P&gt;&lt;P&gt;     rx-17.rx_bytes: 0&lt;/P&gt;&lt;P&gt;     tx-18.tx_packets: 0&lt;/P&gt;&lt;P&gt;     tx-18.tx_bytes: 0&lt;/P&gt;&lt;P&gt;     rx-18.rx_packets: 0&lt;/P&gt;&lt;P&gt;     rx-18.rx_bytes: 0&lt;/P&gt;&lt;P&gt;     tx-19.tx_packets: 200620&lt;/P&gt;&lt;P&gt;     tx-19.tx_bytes: 14138912&lt;/P&gt;&lt;P&gt;     rx-19.rx_packets: 2849304&lt;/P&gt;&lt;P&gt;     rx-19.rx_bytes...&lt;/P&gt;</description>
      <pubDate>Thu, 20 Jul 2017 08:53:19 GMT</pubDate>
      <guid>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495649#M9507</guid>
      <dc:creator>JErni</dc:creator>
      <dc:date>2017-07-20T08:53:19Z</dc:date>
    </item>
    <item>
      <title>Re: i40e XL710 QDA2 as iSCSI initiator results in "RX driver issue detected, PF reset issued", and iscsi ping timeouts</title>
      <link>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495650#M9508</link>
      <description>&lt;P&gt;Hi JPE,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;Thank you very much for the update. Just to clarify: you mentioned below that you have another XL710-DA2 of the same make.&lt;P&gt;&amp;nbsp;&lt;/P&gt;"I have an exactly same make and revision XL710-DA2 card with NVM update 5.05 attached to another server with dual Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz CPUs via a 40 GB switch which has jumbo frames enabled. The other computer manages to maintain the PCI Express speed, but drops the occasional&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;[157555.720755] NOHZ: local_softirq_pending 08"&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;Have you compared the configuration of these two NICs, and is it possible to configure the one showing the "RX driver issue detected, PF reset issued" message with the same settings as the other one?&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;Thanks,&lt;P&gt;&amp;nbsp;&lt;/P&gt;sharon&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 21 Jul 2017 02:36:16 GMT</pubDate>
      <guid>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495650#M9508</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2017-07-21T02:36:16Z</dc:date>
    </item>
    <item>
      <title>Re: i40e XL710 QDA2 as iSCSI initiator results in "RX driver issue detected, PF reset issued", and iscsi ping timeouts</title>
      <link>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495651#M9509</link>
      <description>&lt;P&gt;Hi JPE,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;Please feel free to provide the information whenever you have it.&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;regards,&lt;P&gt;&amp;nbsp;&lt;/P&gt;sharon&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 28 Jul 2017 05:36:48 GMT</pubDate>
      <guid>https://community.intel.com/t5/Ethernet-Products/i40e-XL710-QDA2-as-iSCSI-initiator-results-in-quot-RX-driver/m-p/495651#M9509</guid>
      <dc:creator>idata</dc:creator>
      <dc:date>2017-07-28T05:36:48Z</dc:date>
    </item>
  </channel>
</rss>