Ethernet Products

testpmd: No probed ethernet devices when running testpmd with Intel E810-XXVDA2

read_nic2
Beginner
 

Hi,

 

#####

I submitted the question below through IPS, but the ticket has not been handled for almost three weeks, so I am posting it here in the hope that someone can help.

#####

 

I am running throughput testing using testpmd with an Intel E810-XXVDA2.

When I run testpmd after compiling DPDK, it returns the error "testpmd: No probed ethernet devices". Can you please help identify what caused the error and whether any of my settings are wrong? If more information is needed, please let me know.

Thanks.

Below are setup details and running log:

# ./testpmd -w b1:00.0 -w b1:00.1 -n 16 --socket-mem=0,2048 -c 0xff00000000 -- --burst=64 -i --rxd=2048 --txd=2048 --mbcache=512 --rxq=2 --txq=2 --nb-cores=7 -a --socket-num=0 --rss --disable-crc-strip
EAL: Detected 128 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: PCI device 0000:b1:00.0 on NUMA socket 1
EAL: probe driver: 8086:159b net_ice
EAL: PCI device 0000:b1:00.1 on NUMA socket 1
EAL: probe driver: 8086:159b net_ice
testpmd: No probed ethernet devices
Interactive-mode selected
Fail: input rxq (2) can't be greater than max_rx_queues (0) of port 0
EAL: Error - exiting with code: 1
Cause: rxq 2 invalid - must be >= 0 && <= 0
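
One thing I am not sure about is whether the two ports also need to be unbound from the kernel ice driver and bound to a DPDK-compatible driver before testpmd can probe them. A minimal sketch of that step as I understand it (assuming the stock dpdk-devbind.py script shipped in the DPDK usertools directory and the standard vfio-pci kernel module):

# modprobe vfio-pci
# ./usertools/dpdk-devbind.py --status
# ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:b1:00.0 0000:b1:00.1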

 

DPDK version: dpdk-stable-19.11.3
Intel NIC: Ethernet Controller E810-XXV for SFP
NIC driver and FW version: 
driver: ice
version: 1.3.2
firmware-version: 2.30 0x80005d22 1.2877.0
expansion-rom-version: 
bus-info: 0000:b1:00.0
OS info: CentOS Linux release 7.9.2009 (Core), kernel 3.10.0-1160.el7.x86_64

hugepages settings:

[root@localhost ~]# grep Huge /proc/meminfo
AnonHugePages: 6144 kB
HugePages_Total: 64
HugePages_Free: 64
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
[root@localhost ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="isolcpus=0-23,32-55 intel_idle.max_cstate=0 processor.max_cstate=0 intel_pstate=disable nohz_full=0-31,32-55 rcu_nocbs=0-31,32-55 rcu_nocb_poll default_hugepagesz=1G hugepagesz=1G hugepages=64 audit=0 nosoftlockup crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet"
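
Since --socket-mem=0,2048 asks for 2 GB of 1 GB pages on node 1 only, the per-node availability can also be checked. A minimal sketch of the checks as I understand them (standard kernel sysfs paths, not specific to DPDK):

# cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
# cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages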

NIC NUMA node (ens34f0 and ens34f1 are the NIC ports):

[root@localhost ~]# cat /sys/class/net/ens34f1/device/numa_node
1
[root@localhost ~]# cat /sys/class/net/ens34f0/device/numa_node
1

CPU cores on nodes:

# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
node 0 size: 128316 MB
node 0 free: 84756 MB
node 1 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
node 1 size: 128998 MB
node 1 free: 92206 MB
node distances:
node 0 1
0: 10 20
1: 20 10
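
For reference, the coremask 0xff00000000 in the first testpmd command corresponds to cores 32-39, which are on node 1 according to the output above; a quick shell check of that arithmetic (just for illustration):

# printf '0x%x\n' $(( 0xff << 32 ))
0xff00000000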

DPDK compilation details:

# make -j install T=x86_64-native-linuxapp-gcc DESTDIR=install

# export DPDK_DIR=$PWD

# export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc

Settings in common_base file:

# Compile burst-oriented ICE PMD driver
#
CONFIG_RTE_LIBRTE_ICE_PMD=y
CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n
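
To confirm these options actually took effect in the build, I understand the generated .config under the build directory can be checked (a minimal sketch, assuming the legacy make-based 19.11 build layout):

# grep CONFIG_RTE_LIBRTE_ICE x86_64-native-linuxapp-gcc/.config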

 

########### further information - trial with DPDK 20.11 ###########

OS version (kernel):
# cat /etc/redhat-release
CentOS Linux release 8.3.2011
# uname -a
Linux localhost.localdomain 4.18.0-240.el8.x86_64 #1 SMP Fri Sep 25 19:48:47 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
NIC drivers:
# ethtool -i ens34f0
driver: ice
version: 1.3.2
firmware-version: 2.30 0x80005d22 1.2877.0
expansion-rom-version:
bus-info: 0000:b1:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
DPDK version:
# pwd
/root/dpdk-20.11/build/app


Log after DPDK compilation, showing which drivers were enabled:
Message:
===============
Drivers Enabled
===============

common:
        cpt, dpaax, iavf, octeontx, octeontx2, sfc_efx, mlx5, qat,

bus:
        dpaa, fslmc, ifpga, pci, vdev, vmbus,
mempool:
        bucket, dpaa, dpaa2, octeontx, octeontx2, ring, stack,
net:
        af_packet, ark, atlantic, avp, axgbe, bond, bnx2x, bnxt,
        cxgbe, dpaa, dpaa2, e1000, ena, enetc, enic, failsafe,
        fm10k, i40e, hinic, hns3, iavf, ice, igc, ixgbe,
        kni, liquidio, memif, mlx4, mlx5, netvsc, nfp, null,
        octeontx, octeontx2, pfe, qede, ring, sfc, softnic, tap,
        thunderx, txgbe, vdev_netvsc, vhost, virtio, vmxnet3,
raw:
        dpaa2_cmdif, dpaa2_qdma, ioat, ntb, octeontx2_dma, octeontx2_ep, skeleton,
crypto:
        bcmfs, caam_jr, dpaa_sec, dpaa2_sec, nitrox, null, octeontx, octeontx2,
        scheduler, virtio,
compress:
        octeontx, zlib,
regex:
        mlx5, octeontx2,
vdpa:
        ifc, mlx5,
event:
        dlb, dlb2, dpaa, dpaa2, octeontx2, opdl, skeleton, sw,
        dsw, octeontx,
baseband:
        null, turbo_sw, fpga_lte_fec, fpga_5gnr_fec, acc100,
The Intel NIC still cannot be detected:
# ./dpdk-testpmd -a 0000:b1:00.0 -a 0000:b1:00.1 -n 16 --socket-mem=0,2048 -c 0xff -- --burst=64 -i --rxd=2048 --txd=2048 --mbcache=512 --nb-cores=7 -a --socket-num=0 --rss --disable-crc-strip
EAL: Detected 128 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: No legacy callbacks, legacy socket not created
testpmd: No probed ethernet devices
Interactive-mode selected
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=262144, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
Start automatic packet forwarding
io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, MP allocation mode: native

  io packet forwarding packets/burst=64
  nb forwarding cores=7 - nb forwarding ports=0
testpmd>
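
If it helps, I can also check the VFIO/IOMMU state on the CentOS 8 system, since the ports would again need to be bound to vfio-pci (or another DPDK-compatible driver) for the 20.11 testpmd to probe them. A minimal sketch of the checks I would run (standard kernel tools, not specific to DPDK):

# lsmod | grep vfio
# dmesg | grep -i -e DMAR -e IOMMU
# cat /proc/cmdline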

Caguicla_Intel
Moderator

Hello read_nic2,


Thank you for posting in Intel Ethernet Communities. 


Before we check further whether there are other options for contacting IPS, can you share whether you have already followed up on or resubmitted your request on the Intel® Premier Support site?

https://www.intel.com/content/www/us/en/design/support/ips/training/access-and-login.html


We hope to hear from you soon. 


Should there be no response, I’ll make sure to reach out after 3 business days.


Best regards,

Crisselle C.

Intel® Customer Support


read_nic2
Beginner

Hi,

Yes, I have already left comments on my submitted case, but no reply has been received.

 

Caguicla_Intel
Moderator

Hello read_nic2,


Appreciate your swift response.


Is there any reference/case number that you can share so we can further check on it?


We hope to hear from you soon. 


Should there be no response, I’ll make sure to reach out after 3 business days.


Best regards,

Crisselle C.

Intel® Customer Support


read_nic2
Beginner

Hi,

The case # is 00598027.

Thanks.

 

Caguicla_Intel
Moderator

Hello read_nic2,  


Thank you for the effort in providing the requested information.


Please allow us to check further on how we can help with your request. We'd also like to set expectations that IPS is a different team, but we will do our best to follow up and check the status of your case. Rest assured that we will get back to you as soon as possible, and no later than 2-3 business days.


Hoping for your kind patience.


Best regards,

Crisselle C.

Intel® Customer Support 


Caguicla_Intel
Moderator

Hello read_nic2,  


Thank you for the patience on this matter.


We are glad to inform you that we were able to follow up with the support group that handles IPS # 00598027. The IPS case is assigned to another engineer for further checking and, unfortunately, we do not have direct access to your IPS case. With this, we will leave this request open and wait for your confirmation once you receive an update from IPS support.


Awaiting to hear from you.


We will follow up after 3 business days in case we don't receive a reply. Feel free to let us know if there is a preferred date to reach you so that we can respond accordingly.


Best regards,

Crisselle C.

Intel® Customer Support


read_nic2
Beginner

Hi Crisselle C.,

 

Thanks. Yes, I have just now received the reply from the engineer who was assigned to this ticket. Thanks for your quick help and support.

Best regards.

 

 

Caguicla_Intel
Moderator

Hello read_nic2,  


You're most welcome; it's been my pleasure helping you out!


Please be informed that we will now proceed with closing this request. Thank you for your time and cooperation throughout the process. Feel free to post a new question if you have any other inquiries in the future, as this thread will no longer be monitored.


May you have an amazing weekend ahead!


Best regards,

Crisselle C.

Intel® Customer Support

