
Intel E810-CQDA2 fetches the SG list from an unknown address

rdma_fresh1234
Beginner

While performing RDMA operations, I observed some memory addresses related to the scatter-gather (SG) table in the logs printed by the driver. When I captured and analyzed the PCIe traffic with an FPGA ILA probe, I found that these addresses were obtained via requests to an unregistered/undeclared memory address. Moreover, during each RDMA operation, the SG table memory addresses are consistently fetched from this unknown address. I would like to understand where this address originates.
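For context, the traffic in question comes from a standard libibverbs flow along these lines (a minimal sketch, not the exact test program; the buffer size and access flags are illustrative). Registering a multi-page buffer is what forces the driver to hand the NIC a per-page address list:

/* Minimal libibverbs sketch (illustrative, not the exact test code).
 * Build: gcc reg_mr.c -o reg_mr -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) { fprintf(stderr, "no RDMA device\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(list[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) { fprintf(stderr, "device/PD setup failed\n"); return 1; }

    /* A buffer larger than one 4 KiB page: the driver pins every page and
     * describes the buffer to the NIC as a list of physical page addresses
     * (a PBL). Those per-page addresses are what the NIC later reads back
     * over PCIe while executing RDMA operations. */
    size_t len = 1 << 20; /* 1 MiB = 256 pages */
    void *buf = aligned_alloc(4096, len);

    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("lkey=0x%x rkey=0x%x addr=%p len=%zu\n", mr->lkey, mr->rkey, buf, len);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(list);
    return 0;
}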

 

 

(Attached screenshot: 企业微信截图_17738175051124.png)

 

ILA example:

Request packet data: 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 003800001c0000100000000fb47d2000
Response packet data: 000000000000000fd059f000000000100b1170000000000fcdf8e0000000000fcdd020000000000fcf60e0000000000fcf206000 010000001c00001000400000

The unknown address is 0000000fb47d2000.

Thanks.

Goutham_intel
Moderator

Thank you for reaching out and sharing the details. To ensure you get the most accurate and timely response, there is a dedicated forum for these types of issues. I’m moving your thread there so our specialized team can assist you directly.

 

We truly appreciate your patience and look forward to helping you resolve this quickly.

 

Best regards,

Goutham

Intel Customer Support Technician


pujeeth
Employee

Hello rdma_fresh1234,


Greetings!


Thank you for contacting Intel.


We understand that you are encountering an "unregistered/undeclared memory address" error while performing RDMA operations. To assist you further, we kindly request that you provide the following information:


1) Complete model of the adapter

2) Details of any recent changes made to the system (hardware or software)

3) Confirmation on whether the adapter came with the system or was purchased separately

4) SSU logs for further analysis

https://www.intel.com/content/www/us/en/support/articles/000008563/ethernet-products.html


Regards,

Pujeeth_Intel


rdma_fresh1234
Beginner

1) The adapter model is E810-CQDA2.

2) Linux system: Linux test 5.4.0-216-generic #236-Ubuntu SMP Fri Apr 11 19:53:21 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux;

irdma driver version: 1.10.15; the RDMA tool is rdma-example-master (from GitHub).

3) I purchased this network adapter separately and then downloaded the driver from the Intel official website, following the online instructions.

4) I found that this address behaves like a fixed address negotiated between the host and the device when the irdma driver is loaded, possibly similar to the PBL described in the Ethernet Controller E810 Datasheet. During each RDMA operation, this fixed address is accessed to obtain an SG address table.

When the NIC processes CQP commands such as the Allocate/Register/RegisterShared/Deallocate STag Descriptor Format (OP = 0x0A), and Leaf_PBL_Size is not 00b, the NIC applies a fixed offset to this address, accesses it, and retrieves the set of Physical_Buffer_Address entries corresponding to the MR.
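One way to cross-check the Physical_Buffer_Address entries seen on the bus is to translate the registered buffer's virtual pages to physical addresses via /proc/self/pagemap (a sketch; this must run as root for PFNs to be visible, and buf/len stand for whatever was passed to ibv_reg_mr):

/* Translate a registered buffer's pages to physical addresses using
 * /proc/self/pagemap, for comparison with the Physical_Buffer_Address
 * entries captured on PCIe. Run as root; PFNs read as zero otherwise. */
#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static void dump_phys(void *buf, size_t len)
{
    long psize = sysconf(_SC_PAGESIZE);
    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0) { perror("pagemap"); return; }

    for (uintptr_t va = (uintptr_t)buf; va < (uintptr_t)buf + len; va += psize) {
        uint64_t entry;
        off_t off = (off_t)(va / psize) * sizeof(entry); /* 8 bytes per page */
        if (pread(fd, &entry, sizeof(entry), off) != sizeof(entry))
            break;
        if (entry & (1ULL << 63)) /* bit 63: page present */
            printf("va 0x%lx -> pa 0x%llx\n", (unsigned long)va,
                   (unsigned long long)((entry & ((1ULL << 55) - 1)) * psize
                                        + va % psize));
        else
            printf("va 0x%lx -> not present\n", (unsigned long)va);
    }
    close(fd);
}

If the per-page physical addresses printed here match the entries the NIC fetches from the unknown region, that region is very likely the backing store for the page lists.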

 

Thanks.

Sazirah
Employee

Hello rdma_fresh1234,


Thank you for sharing the information requested. 


For us to check further, kindly generate the SSU log and share it with us. You may follow the steps in the link below:


https://www.intel.com/content/www/us/en/support/articles/000008563/ethernet-products.html


Regards,

Sazzy_Intel

Intel Customer Support Technician


Shankith
Employee

Hello rdma_fresh1234,


Thank you for sharing the details with us.


To proceed with further analysis, kindly provide the SSU (System Support Utility) logs at your convenience.


You can generate the SSU logs by following the instructions available at the link below:

https://www.intel.com/content/www/us/en/support/articles/000008563/ethernet-products.html


Regards,

Shankith K P

Intel Customer Support Technician


rdma_fresh1234
Beginner

I used one QSFP port of the E810.

Below is part of the log.txt contents:

[15013.182015] ice 0000:1c:00.0 ens1f0: NIC Link is up 100 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg Advertised: On, Autoneg Negotiated: True, Flow Control: None
[15013.190914] 8021q: adding VLAN 0 to HW filter on device ens1f0
[15013.191018] IPv6: ADDRCONF(NETDEV_CHANGE): ens1f0: link becomes ready
[15014.920456] ice 0000:1c:00.0: Commit DCB Configuration to the hardware
[15014.999472] INFO: Flow control is disabled for this traffic class (0) on this vsi.
[15015.489511] 8021q: adding VLAN 0 to HW filter on device ens1f0
[15015.489571] ice 0000:1c:00.0: Commit DCB Configuration to the hardware
[15015.565982] INFO: Flow control is disabled for this traffic class (0) on this vsi.
[15016.061908] 8021q: adding VLAN 0 to HW filter on device ens1f0
[15079.391078] ens1f1 speed is unknown, defaulting to 1000
[15079.391672] ens1f1 speed is unknown, defaulting to 1000
[15146.943249] ens1f1 speed is unknown, defaulting to 1000
[15146.987914] irdma_dbg_pf_exit: removing debugfs entries
[15147.003532] infiniband iwp28s0f0: ib_query_port failed (-19)
[15147.047902] irdma_dbg_pf_exit: removing debugfs entries
[15154.871234] irdma driver version: 2.0.34
[15154.871319] irdma: minor version mismatch: expected 10.4 caller specified 10.2
[15154.871323] probe: cdev_info=00000000b5adb8ea, cdev_info->dev.aux_dev.bus->number=28, cdev_info->rdma_active_port=0xff netdev=ens1f0
[15154.871350] irdma: Because roce_ena is ENABLED, roce_port_cfg will be ignored.
[15154.871356] ice 0000:1c:00.0: irdma_fill_device_info: iwdev->lag_mode = 0
[15154.993442] irdma_dbg_prep_dump_buf: irdma_dbg_dump_buf_len = 16384
[15154.996056] INFO: Flow control is disabled for this traffic class (0) on this vsi.
[15154.997132] irdma: minor version mismatch: expected 10.4 caller specified 10.2
[15154.997135] probe: cdev_info=000000001fc9a706, cdev_info->dev.aux_dev.bus->number=28, cdev_info->rdma_active_port=0xff netdev=ens1f1
[15154.997164] ice 0000:1c:00.1: irdma_fill_device_info: iwdev->lag_mode = 0
[15155.116780] ens1f1 speed is unknown, defaulting to 1000
[15155.117020] ens1f1 speed is unknown, defaulting to 1000
[15155.117287] ens1f1 speed is unknown, defaulting to 1000
[15155.118855] INFO: Flow control is disabled for this traffic class (0) on this vsi.

...

#Interface ens1f0
NIC statistics:
rx_unicast: 7
tx_unicast: 7
rx_multicast: 19
tx_multicast: 46
rx_broadcast: 0
tx_broadcast: 1
rx_bytes: 2967
tx_bytes: 5318
rx_dropped: 1
rx_unknown_protocol: 0
rx_alloc_fail: 0
rx_pg_alloc_fail: 0
tx_errors: 0
tx_linearized: 0
tx_busy: 0
tx_restart: 0
tx_queue_0_packets: 0
tx_queue_0_bytes: 0
tx_queue_1_packets: 0
tx_queue_1_bytes: 0
tx_queue_2_packets: 12
tx_queue_2_bytes: 936
tx_queue_3_packets: 0
tx_queue_3_bytes: 0
tx_queue_4_packets: 27
tx_queue_4_bytes: 1971
tx_queue_5_packets: 0
tx_queue_5_bytes: 0
tx_queue_6_packets: 0
tx_queue_6_bytes: 0
tx_queue_7_packets: 0
tx_queue_7_bytes: 0
tx_queue_8_packets: 0
tx_queue_8_bytes: 0
tx_queue_9_packets: 0
tx_queue_9_bytes: 0
tx_queue_10_packets: 0
tx_queue_10_bytes: 0
tx_queue_11_packets: 0
tx_queue_11_bytes: 0
tx_queue_12_packets: 0
tx_queue_12_bytes: 0
tx_queue_13_packets: 0
tx_queue_13_bytes: 0
tx_queue_14_packets: 0
tx_queue_14_bytes: 0
tx_queue_15_packets: 0
tx_queue_15_bytes: 0
tx_queue_16_packets: 0
tx_queue_16_bytes: 0
tx_queue_17_packets: 0
tx_queue_17_bytes: 0
tx_queue_18_packets: 0
tx_queue_18_bytes: 0
tx_queue_19_packets: 0
tx_queue_19_bytes: 0
tx_queue_20_packets: 0
tx_queue_20_bytes: 0
tx_queue_21_packets: 1
tx_queue_21_bytes: 60
tx_queue_22_packets: 0
tx_queue_22_bytes: 0
tx_queue_23_packets: 0
tx_queue_23_bytes: 0
tx_queue_24_packets: 0
tx_queue_24_bytes: 0
tx_queue_25_packets: 0
tx_queue_25_bytes: 0
tx_queue_26_packets: 0
tx_queue_26_bytes: 0
tx_queue_27_packets: 0
tx_queue_27_bytes: 0
tx_queue_28_packets: 0
tx_queue_28_bytes: 0
tx_queue_29_packets: 0
tx_queue_29_bytes: 0
tx_queue_30_packets: 0
tx_queue_30_bytes: 0
tx_queue_31_packets: 0
tx_queue_31_bytes: 0
tx_queue_32_packets: 1
tx_queue_32_bytes: 73
tx_queue_33_packets: 0
tx_queue_33_bytes: 0
tx_queue_34_packets: 0
tx_queue_34_bytes: 0
tx_queue_35_packets: 6
tx_queue_35_bytes: 516
tx_queue_36_packets: 0
tx_queue_36_bytes: 0
tx_queue_37_packets: 0
tx_queue_37_bytes: 0
tx_queue_38_packets: 0
tx_queue_38_bytes: 0
tx_queue_39_packets: 0
tx_queue_39_bytes: 0
tx_queue_40_packets: 0
tx_queue_40_bytes: 0
tx_queue_41_packets: 0
tx_queue_41_bytes: 0
tx_queue_42_packets: 0
tx_queue_42_bytes: 0
tx_queue_43_packets: 0
tx_queue_43_bytes: 0
tx_queue_44_packets: 0
tx_queue_44_bytes: 0
tx_queue_45_packets: 0
tx_queue_45_bytes: 0
tx_queue_46_packets: 0
tx_queue_46_bytes: 0
tx_queue_47_packets: 0
tx_queue_47_bytes: 0
tx_queue_48_packets: 0
tx_queue_48_bytes: 0
tx_queue_49_packets: 0
tx_queue_49_bytes: 0
tx_queue_50_packets: 0
tx_queue_50_bytes: 0
tx_queue_51_packets: 0
tx_queue_51_bytes: 0
tx_queue_52_packets: 0
tx_queue_52_bytes: 0
tx_queue_53_packets: 0
tx_queue_53_bytes: 0
tx_queue_54_packets: 0
tx_queue_54_bytes: 0
tx_queue_55_packets: 0
tx_queue_55_bytes: 0
tx_queue_56_packets: 0
tx_queue_56_bytes: 0
tx_queue_57_packets: 0
tx_queue_57_bytes: 0
tx_queue_58_packets: 0
tx_queue_58_bytes: 0
tx_queue_59_packets: 0
tx_queue_59_bytes: 0
tx_queue_60_packets: 0
tx_queue_60_bytes: 0
tx_queue_61_packets: 0
tx_queue_61_bytes: 0
tx_queue_62_packets: 0
tx_queue_62_bytes: 0
tx_queue_63_packets: 0
tx_queue_63_bytes: 0
tx_queue_64_packets: 0
tx_queue_64_bytes: 0
tx_queue_65_packets: 0
tx_queue_65_bytes: 0
tx_queue_66_packets: 0
tx_queue_66_bytes: 0
tx_queue_67_packets: 0
tx_queue_67_bytes: 0
tx_queue_68_packets: 0
tx_queue_68_bytes: 0
tx_queue_69_packets: 0
tx_queue_69_bytes: 0
tx_queue_70_packets: 0
tx_queue_70_bytes: 0
tx_queue_71_packets: 0
tx_queue_71_bytes: 0
tx_queue_72_packets: 0
tx_queue_72_bytes: 0
tx_queue_73_packets: 0
tx_queue_73_bytes: 0
tx_queue_74_packets: 0
tx_queue_74_bytes: 0
tx_queue_75_packets: 0
tx_queue_75_bytes: 0
tx_queue_76_packets: 0
tx_queue_76_bytes: 0
tx_queue_77_packets: 0
tx_queue_77_bytes: 0
tx_queue_78_packets: 0
tx_queue_78_bytes: 0
tx_queue_79_packets: 0
tx_queue_79_bytes: 0
rx_queue_0_packets: 19
rx_queue_0_bytes: 1374
rx_queue_1_packets: 0
rx_queue_1_bytes: 0
rx_queue_2_packets: 0
rx_queue_2_bytes: 0
rx_queue_3_packets: 0
rx_queue_3_bytes: 0
rx_queue_4_packets: 0
rx_queue_4_bytes: 0
rx_queue_5_packets: 0
rx_queue_5_bytes: 0
rx_queue_6_packets: 0
rx_queue_6_bytes: 0
rx_queue_7_packets: 0
rx_queue_7_bytes: 0
rx_queue_8_packets: 0
rx_queue_8_bytes: 0
rx_queue_9_packets: 0
rx_queue_9_bytes: 0
rx_queue_10_packets: 0
rx_queue_10_bytes: 0
rx_queue_11_packets: 0
rx_queue_11_bytes: 0
rx_queue_12_packets: 0
rx_queue_12_bytes: 0
rx_queue_13_packets: 0
rx_queue_13_bytes: 0
rx_queue_14_packets: 0
rx_queue_14_bytes: 0
rx_queue_15_packets: 0
rx_queue_15_bytes: 0
rx_queue_16_packets: 0
rx_queue_16_bytes: 0
rx_queue_17_packets: 0
rx_queue_17_bytes: 0
rx_queue_18_packets: 0
rx_queue_18_bytes: 0
rx_queue_19_packets: 0
rx_queue_19_bytes: 0
rx_queue_20_packets: 0
rx_queue_20_bytes: 0
rx_queue_21_packets: 0
rx_queue_21_bytes: 0
rx_queue_22_packets: 0
rx_queue_22_bytes: 0
rx_queue_23_packets: 0
rx_queue_23_bytes: 0
rx_queue_24_packets: 0
rx_queue_24_bytes: 0
rx_queue_25_packets: 0
rx_queue_25_bytes: 0
rx_queue_26_packets: 0
rx_queue_26_bytes: 0
rx_queue_27_packets: 0
rx_queue_27_bytes: 0
rx_queue_28_packets: 0
rx_queue_28_bytes: 0
rx_queue_29_packets: 0
rx_queue_29_bytes: 0
rx_queue_30_packets: 0
rx_queue_30_bytes: 0
rx_queue_31_packets: 0
rx_queue_31_bytes: 0
rx_queue_32_packets: 0
rx_queue_32_bytes: 0
rx_queue_33_packets: 0
rx_queue_33_bytes: 0
rx_queue_34_packets: 0
rx_queue_34_bytes: 0
rx_queue_35_packets: 0
rx_queue_35_bytes: 0
rx_queue_36_packets: 0
rx_queue_36_bytes: 0
rx_queue_37_packets: 0
rx_queue_37_bytes: 0
rx_queue_38_packets: 0
rx_queue_38_bytes: 0
rx_queue_39_packets: 0
rx_queue_39_bytes: 0
rx_queue_40_packets: 0
rx_queue_40_bytes: 0
rx_queue_41_packets: 0
rx_queue_41_bytes: 0
rx_queue_42_packets: 0
rx_queue_42_bytes: 0
rx_queue_43_packets: 0
rx_queue_43_bytes: 0
rx_queue_44_packets: 0
rx_queue_44_bytes: 0
rx_queue_45_packets: 0
rx_queue_45_bytes: 0
rx_queue_46_packets: 0
rx_queue_46_bytes: 0
rx_queue_47_packets: 0
rx_queue_47_bytes: 0
rx_queue_48_packets: 0
rx_queue_48_bytes: 0
rx_queue_49_packets: 0
rx_queue_49_bytes: 0
rx_queue_50_packets: 0
rx_queue_50_bytes: 0
rx_queue_51_packets: 0
rx_queue_51_bytes: 0
rx_queue_52_packets: 0
rx_queue_52_bytes: 0
rx_queue_53_packets: 0
rx_queue_53_bytes: 0
rx_queue_54_packets: 0
rx_queue_54_bytes: 0
rx_queue_55_packets: 0
rx_queue_55_bytes: 0
rx_queue_56_packets: 0
rx_queue_56_bytes: 0
rx_queue_57_packets: 0
rx_queue_57_bytes: 0
rx_queue_58_packets: 0
rx_queue_58_bytes: 0
rx_queue_59_packets: 0
rx_queue_59_bytes: 0
rx_queue_60_packets: 0
rx_queue_60_bytes: 0
rx_queue_61_packets: 0
rx_queue_61_bytes: 0
rx_queue_62_packets: 0
rx_queue_62_bytes: 0
rx_queue_63_packets: 0
rx_queue_63_bytes: 0
rx_queue_64_packets: 0
rx_queue_64_bytes: 0
rx_queue_65_packets: 0
rx_queue_65_bytes: 0
rx_queue_66_packets: 0
rx_queue_66_bytes: 0
rx_queue_67_packets: 0
rx_queue_67_bytes: 0
rx_queue_68_packets: 0
rx_queue_68_bytes: 0
rx_queue_69_packets: 0
rx_queue_69_bytes: 0
rx_queue_70_packets: 0
rx_queue_70_bytes: 0
rx_queue_71_packets: 0
rx_queue_71_bytes: 0
rx_queue_72_packets: 0
rx_queue_72_bytes: 0
rx_queue_73_packets: 0
rx_queue_73_bytes: 0
rx_queue_74_packets: 0
rx_queue_74_bytes: 0
rx_queue_75_packets: 0
rx_queue_75_bytes: 0
rx_queue_76_packets: 0
rx_queue_76_bytes: 0
rx_queue_77_packets: 0
rx_queue_77_bytes: 0
rx_queue_78_packets: 0
rx_queue_78_bytes: 0
rx_queue_79_packets: 0
rx_queue_79_bytes: 0
rx_bytes.nic: 41837
tx_bytes.nic: 5534
rx_unicast.nic: 7
tx_unicast.nic: 7
rx_multicast.nic: 524
tx_multicast.nic: 46
rx_broadcast.nic: 0
tx_broadcast.nic: 1
tx_errors.nic: 0
tx_timeout.nic: 0
rx_size_64.nic: 1
tx_size_64.nic: 1
rx_size_127.nic: 527
tx_size_127.nic: 49
rx_size_255.nic: 0
tx_size_255.nic: 0
rx_size_511.nic: 2
tx_size_511.nic: 3
rx_size_1023.nic: 1
tx_size_1023.nic: 1
rx_size_1522.nic: 0
tx_size_1522.nic: 0
rx_size_big.nic: 0
tx_size_big.nic: 0
link_xon_rx.nic: 0
link_xon_tx.nic: 0
link_xoff_rx.nic: 0
link_xoff_tx.nic: 0
tx_dropped_link_down.nic: 0
rx_undersize.nic: 0
rx_fragments.nic: 0
rx_oversize.nic: 0
rx_jabber.nic: 0
rx_csum_bad.nic: 0
rx_length_errors.nic: 0
rx_dropped.nic: 0
rx_crc_errors.nic: 0
illegal_bytes.nic: 0
mac_local_faults.nic: 0
mac_remote_faults.nic: 0
fdir_sb_match.nic: 0
fdir_sb_status.nic: 1
chnl_inline_fd_match: 0
tx_hwtstamp_skipped: 0
tx_hwtstamp_timeouts: 0
tx_hwtstamp_flushed: 0
tx_hwtstamp_discarded: 0
late_cached_phc_updates: 0
tx_priority_0_xon.nic: 0
tx_priority_0_xoff.nic: 0
tx_priority_1_xon.nic: 0
tx_priority_1_xoff.nic: 0
tx_priority_2_xon.nic: 0
tx_priority_2_xoff.nic: 0
tx_priority_3_xon.nic: 0
tx_priority_3_xoff.nic: 0
tx_priority_4_xon.nic: 0
tx_priority_4_xoff.nic: 0
tx_priority_5_xon.nic: 0
tx_priority_5_xoff.nic: 0
tx_priority_6_xon.nic: 0
tx_priority_6_xoff.nic: 0
tx_priority_7_xon.nic: 0
tx_priority_7_xoff.nic: 0
rx_priority_0_xon.nic: 0
rx_priority_0_xoff.nic: 0
rx_priority_1_xon.nic: 0
rx_priority_1_xoff.nic: 0
rx_priority_2_xon.nic: 0
rx_priority_2_xoff.nic: 0
rx_priority_3_xon.nic: 0
rx_priority_3_xoff.nic: 0
rx_priority_4_xon.nic: 0
rx_priority_4_xoff.nic: 0
rx_priority_5_xon.nic: 0
rx_priority_5_xoff.nic: 0
rx_priority_6_xon.nic: 0
rx_priority_6_xoff.nic: 0
rx_priority_7_xon.nic: 0
rx_priority_7_xoff.nic: 0

...

Mar 19 06:39:23 test kernel: [15013.182015] ice 0000:1c:00.0 ens1f0: NIC Link is up 100 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg Advertised: On, Autoneg Negotiated: True, Flow Control: None
Mar 19 06:39:23 test kernel: [15013.190914] 8021q: adding VLAN 0 to HW filter on device ens1f0
Mar 19 06:39:23 test kernel: [15013.191018] IPv6: ADDRCONF(NETDEV_CHANGE): ens1f0: link becomes ready
Mar 19 06:39:23 test systemd-networkd[1868]: ens1f0: Link UP
Mar 19 06:39:23 test systemd-networkd[1868]: ens1f0: Gained carrier
Mar 19 06:39:24 test systemd-networkd[1868]: ens1f0: Gained IPv6LL
Mar 19 06:39:24 test kernel: [15014.920456] ice 0000:1c:00.0: Commit DCB Configuration to the hardware
Mar 19 06:39:25 test kernel: [15014.999472] INFO: Flow control is disabled for this traffic class (0) on this vsi.
Mar 19 06:39:25 test systemd-networkd[1868]: ens1f0: Link DOWN
Mar 19 06:39:25 test systemd-networkd[1868]: ens1f0: Lost carrier
Mar 19 06:39:25 test systemd-networkd[1868]: ens1f0: Link UP
Mar 19 06:39:25 test systemd-networkd[1868]: ens1f0: Gained carrier
Mar 19 06:39:25 test kernel: [15015.489511] 8021q: adding VLAN 0 to HW filter on device ens1f0
Mar 19 06:39:25 test kernel: [15015.489571] ice 0000:1c:00.0: Commit DCB Configuration to the hardware
Mar 19 06:39:25 test kernel: [15015.565982] INFO: Flow control is disabled for this traffic class (0) on this vsi.
Mar 19 06:39:25 test systemd-networkd[1868]: ens1f0: Link DOWN
Mar 19 06:39:25 test systemd-networkd[1868]: ens1f0: Lost carrier
Mar 19 06:39:26 test systemd-networkd[1868]: ens1f0: Link UP
Mar 19 06:39:26 test systemd-networkd[1868]: ens1f0: Gained carrier
Mar 19 06:39:26 test kernel: [15016.061908] 8021q: adding VLAN 0 to HW filter on device ens1f0
Mar 19 06:39:26 test lldpad[1902]: l2_packet_send - send: Network is down
Mar 19 06:39:26 test lldpad[1902]: recvfrom(Event interface): No buffer space available
Mar 19 06:39:27 test systemd-networkd[1868]: ens1f0: Gained IPv6LL
Mar 19 06:39:42 test host_ams[2085]: [2102] [2026-03-19 06:39:42.204] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:40:12 test host_ams[2085]: [2102] [2026-03-19 06:40:12.244] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:40:29 test kernel: [15079.391078] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:40:29 test kernel: [15079.391672] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:40:42 test host_ams[2085]: [2102] [2026-03-19 06:40:42.285] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:41:12 test host_ams[2085]: [2102] [2026-03-19 06:41:12.325] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:41:22 test pcie_server[2072]: [2090] [2026-03-19 06:41:22.959] [console] [info] [pcie.cpp:69] [PcieSend] Send fd 3 datasize 160
Mar 19 06:41:37 test kernel: [15146.943249] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:37 test kernel: [15146.987914] irdma_dbg_pf_exit: removing debugfs entries
Mar 19 06:41:37 test kernel: [15147.003532] infiniband iwp28s0f0: ib_query_port failed (-19)
Mar 19 06:41:37 test kernel: [15147.047902] irdma_dbg_pf_exit: removing debugfs entries
Mar 19 06:41:42 test host_ams[2085]: [2102] [2026-03-19 06:41:42.365] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:41:44 test kernel: [15154.871234] irdma driver version: 2.0.34
Mar 19 06:41:44 test kernel: [15154.871319] irdma: minor version mismatch: expected 10.4 caller specified 10.2
Mar 19 06:41:44 test kernel: [15154.871323] probe: cdev_info=00000000b5adb8ea, cdev_info->dev.aux_dev.bus->number=28, cdev_info->rdma_active_port=0xff netdev=ens1f0
Mar 19 06:41:44 test kernel: [15154.871350] irdma: Because roce_ena is ENABLED, roce_port_cfg will be ignored.
Mar 19 06:41:44 test kernel: [15154.871356] ice 0000:1c:00.0: irdma_fill_device_info: iwdev->lag_mode = 0
Mar 19 06:41:45 test kernel: [15154.993442] irdma_dbg_prep_dump_buf: irdma_dbg_dump_buf_len = 16384
Mar 19 06:41:45 test kernel: [15154.996056] INFO: Flow control is disabled for this traffic class (0) on this vsi.
Mar 19 06:41:45 test kernel: [15154.997132] irdma: minor version mismatch: expected 10.4 caller specified 10.2
Mar 19 06:41:45 test kernel: [15154.997135] probe: cdev_info=000000001fc9a706, cdev_info->dev.aux_dev.bus->number=28, cdev_info->rdma_active_port=0xff netdev=ens1f1
Mar 19 06:41:45 test kernel: [15154.997164] ice 0000:1c:00.1: irdma_fill_device_info: iwdev->lag_mode = 0
Mar 19 06:41:45 test kernel: [15155.116780] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:45 test kernel: [15155.117020] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:45 test kernel: [15155.117287] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:45 test kernel: [15155.118855] INFO: Flow control is disabled for this traffic class (0) on this vsi.
Mar 19 06:41:45 test kernel: [15155.119723] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.322706] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.323171] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.323433] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.323694] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.323972] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.324236] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.324499] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.324762] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.325024] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.325289] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.325551] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.325813] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.326074] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.326336] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.326597] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.326859] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.327121] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.327386] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.327648] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.327910] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.328174] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.328436] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.328700] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.328963] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.329226] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.329488] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.329749] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.330010] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.330288] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.330549] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.330811] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:41:50 test kernel: [15160.331072] ens1f1 speed is unknown, defaulting to 1000
Mar 19 06:42:12 test host_ams[2085]: [2102] [2026-03-19 06:42:12.406] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:42:42 test host_ams[2085]: [2102] [2026-03-19 06:42:42.446] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:43:12 test host_ams[2085]: [2102] [2026-03-19 06:43:12.486] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:43:42 test host_ams[2085]: [2102] [2026-03-19 06:43:42.527] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:44:12 test host_ams[2085]: [2102] [2026-03-19 06:44:12.567] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:44:42 test host_ams[2085]: [2102] [2026-03-19 06:44:42.607] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:45:12 test host_ams[2085]: [2102] [2026-03-19 06:45:12.647] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:45:42 test host_ams[2085]: [2102] [2026-03-19 06:45:42.688] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:46:12 test host_ams[2085]: [2102] [2026-03-19 06:46:12.728] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.
Mar 19 06:46:42 test host_ams[2085]: [2102] [2026-03-19 06:46:42.768] [console] [error] [pcie_channel.cpp:19] recv pcie msg failed, err: Receive message from pcie server exceeds timeout.

 

 

By the way, that address changes whenever the irdma driver is loaded.

So I think it may be a PBL address. Could someone tell me how to determine the PBL address on the E810-CQDA2?
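For background, a generic two-level paged buffer list involves exactly this kind of extra indirection. The sketch below is illustrative arithmetic only (the 4 KiB leaf size and the names are assumptions, not the E810's documented layout):

/* Generic two-level PBL arithmetic (illustrative; the 4 KiB leaf size
 * and names are assumptions, not E810 register definitions). */
#include <stdint.h>

#define PBL_PAGE_SIZE 4096u
#define PBLE_SIZE     8u                          /* one 64-bit physical address */
#define LEAF_PBLES    (PBL_PAGE_SIZE / PBLE_SIZE) /* 512 entries per leaf page */

/* An MR of `len` bytes needs one PBLE per backing page. */
static inline uint64_t pble_count(uint64_t len)
{
    return (len + PBL_PAGE_SIZE - 1) / PBL_PAGE_SIZE;
}

/* Two-level walk: a root page of leaf-page pointers, each leaf holding
 * 512 physical page addresses. Each hop is one extra DMA read, which
 * would explain a per-operation fetch from a fixed base address. */
static inline uint64_t pble_lookup(const uint64_t *root, uint64_t page_idx)
{
    const uint64_t *leaf =
        (const uint64_t *)(uintptr_t)root[page_idx / LEAF_PBLES];
    return leaf[page_idx % LEAF_PBLES];
}

For example, pble_count(1 << 20) is 256, so a 1 MiB MR fits in a single leaf page; larger MRs spread across multiple leaves behind one root.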

pujeeth
Employee

Hello rdma_fresh1234,


Greetings!


Thank you for sharing the output. We are currently reviewing the case and will get back to you with an update as soon as possible.


Regards,

Pujeeth_Intel


Shankith
Employee

Hello rdma_fresh1234,


Greetings!


We would like to inform you that we are still reviewing this case and will require some additional time.


We will get back to you as soon as possible.


Thank you for your patience and understanding.


Regards,

Shankith K P

Intel Customer Support Technician



rdma_fresh1234
Beginner

Hi Shankith,

 

Thanks for your reply!

 

After my investigation and analysis over this period, I believe that this address corresponds to the memory location where the PBLE resides. Can you tell me how the NIC knows the physical memory address where the PBLE resides?

 

Regards,

rdma_fresh1234

Sazirah
Employee

Hi rdma_fresh1234,


Greetings.


Thank you for patiently waiting for our update.


Regarding your current query, "how does the NIC know the physical memory address where the PBLE resides?", we suggest checking Section 11.4.1.4 of the E810 datasheet. Hopefully it answers your query. If you still have any other questions, please let us know.

 

https://www.intel.com/content/www/us/en/content-details/613875/intel-ethernet-controller-e810-datasheet.html?DocID=613875


Regards,

Sazzy_Intel

Intel Customer Support Technician


Shankith
Employee

Hello rdma_fresh1234,

 

Greetings for the day!

 

We are following up to check whether the information we provided was helpful.

Kindly get back to us and let us know if you need any further assistance, or if we can go ahead and close this thread.

 

Regards,

Shankith K P

Intel Customer Support Technician


rdma_fresh1234
Beginner

Hi Shankith,

 

Greetings.

 

Based on Sazirah's suggestion, I reviewed the relevant sections of the documentation. During initialization, I did indeed observe the driver performing a large number of Update PE SDs operations with the NIC via CQP commands. From the data, it appears that the PBLE address is derived by applying an offset to the SD address when the index of the HMC Segment Descriptor equals 0x05c.

My current questions are (see the sketch after the list for the model I have in mind):

1) During the numerous Update PE SDs operations in the initialization phase, how does the NIC determine which SD entry should be used as the base address for the PBLE?

2) How does the NIC obtain the offset required to derive the PBLE address from the SD address?
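To make the questions concrete, this is the model the capture suggests (a sketch; the structure layout and names are assumptions, and only the SD index 0x05c and the base-plus-offset arithmetic come from the observation):

/* Conceptual model of the observed behavior. The layout and names are
 * assumed for illustration; only the SD index 0x05c and the base+offset
 * arithmetic were observed on the bus. */
#include <stdint.h>

struct hmc_sd_entry {
    uint64_t page_base; /* host physical address programmed into the NIC
                           via an Update PE SDs CQP command */
    int      valid;
};

#define SD_IDX_PBLE 0x05c /* SD index observed to back the PBLE region */

/* Question 1 asks how the NIC selects this SD as the PBLE base;
 * question 2 asks where `offset` comes from. The fetch address seen on
 * the bus appears to be: */
static inline uint64_t pble_fetch_addr(const struct hmc_sd_entry *sd,
                                       uint64_t offset)
{
    return sd[SD_IDX_PBLE].page_base + offset;
}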

 

Regards,

rdma_fresh1234

 

 

Shankith
Employee

Hello rdma_fresh1234,


Thank you for writing back.


We will check this internally and we will get back to you with an update.


Thank you for your patience.


Regards,

Shankith K P

Intel Customer Support Technician

