Ethernet Products

Can the DPDK AF_XDP PMD support the macvlan driver in a container?

nine9
Beginner

dpdk-testpmd with the AF_XDP PMD fails on the p1p1 (macvlan) interface, but works on the eth0 (veth) interface.

Also, is there a way to make the AF_XDP PMD work in XDP SKB (generic) mode?
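
For context, below is a minimal sketch (my own illustration, not DPDK or testpmd code; the interface name, queue id and sizes are placeholders) of how a raw AF_XDP socket can explicitly request SKB/generic XDP plus copy mode through the xsk helper API from libbpf/libxdp. The AF_XDP PMD builds its own xsk_socket_config internally, so this only shows which knobs exist at the socket level, not a testpmd option:

=============== sketch: raw AF_XDP socket forced into SKB/copy mode ===============

/* Sketch only: open an AF_XDP socket with the XDP program attached in
 * generic (SKB) mode and a copy-mode binding.  Older libbpf ships the xsk
 * helpers as <bpf/xsk.h>; newer setups get the same API from libxdp's
 * <xdp/xsk.h>.  Build with: cc xsk_skb_mode.c -lbpf   (add -lxdp for libxdp)
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <linux/if_link.h>   /* XDP_FLAGS_SKB_MODE */
#include <linux/if_xdp.h>    /* XDP_COPY */
#include <bpf/xsk.h>         /* or <xdp/xsk.h> with libxdp */

#define NUM_FRAMES 4096

int main(int argc, char **argv)
{
    const char *ifname = argc > 1 ? argv[1] : "p1p1";   /* placeholder iface */
    size_t umem_len = (size_t)NUM_FRAMES * XSK_UMEM__DEFAULT_FRAME_SIZE;
    void *umem_area = NULL;
    struct xsk_umem *umem;
    struct xsk_ring_prod fill, tx;
    struct xsk_ring_cons comp, rx;
    struct xsk_socket *xsk;
    struct xsk_socket_config cfg = {
        .rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
        .tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
        .xdp_flags = XDP_FLAGS_SKB_MODE,  /* attach the XDP program in generic mode */
        .bind_flags = XDP_COPY,           /* generic mode implies copy, not zero-copy */
    };
    int ret;

    /* Page-aligned UMEM backing store for the packet buffers. */
    if (posix_memalign(&umem_area, (size_t)getpagesize(), umem_len))
        return 1;

    ret = xsk_umem__create(&umem, umem_area, umem_len, &fill, &comp, NULL);
    if (ret) {
        fprintf(stderr, "xsk_umem__create: %s\n", strerror(-ret));
        return 1;
    }

    /* Bind to queue 0; needs root/CAP_NET_RAW inside the container. */
    ret = xsk_socket__create(&xsk, ifname, 0, umem, &rx, &tx, &cfg);
    if (ret) {
        fprintf(stderr, "xsk_socket__create on %s: %s\n", ifname, strerror(-ret));
        return 1;
    }

    printf("AF_XDP socket bound to %s queue 0 in SKB/copy mode\n", ifname);

    xsk_socket__delete(xsk);
    xsk_umem__delete(umem);
    free(umem_area);
    return 0;
}

===================================================================================

Generic (SKB) XDP runs in the core network stack rather than in the driver, so it does not depend on macvlan offering native XDP support; whether the PMD actually falls back to it is decided by the libbpf/libxdp logic it was built against.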

===============fails on p1p1 (macvlan) interface====================

5p8j4:/tmp # ./dpdk-testpmd --log-level=pmd.net.af_xdp:debug --no-huge --no-pci --no-telemetry --vdev net_af_xdp,iface=p1p1 -- --total-num-mbufs 8192
EAL: Detected CPU lcores: 40
EAL: Detected NUMA nodes: 1
EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp
init_internals(): Zero copy between umem and mbuf enabled.
testpmd: create a new mbuf pool <mb_pool_0>: n=8192, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
eth_rx_queue_setup(): Set up rx queue, rx queue id: 0, xsk queue id: 0
libbpf: elf: skipping unrecognized data section(8) .xdp_run_config
libbpf: elf: skipping unrecognized data section(9) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: prog 'xdp_pass': BPF program load failed: Invalid argument
libbpf: prog 'xdp_pass': failed to load: -22
libbpf: failed to load object '/usr/lib64/bpf/xdp-dispatcher.o'
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: Kernel error message: Underlying driver does not support XDP in native mode
libxdp: Error attaching XDP program to ifindex 5: Operation not supported
libxdp: XDP mode not supported; try using SKB mode
xsk_configure(): Failed to create xsk socket.
eth_rx_queue_setup(): Failed to configure xdp socket
Fail to configure port 0 rx queues
rte_pmd_af_xdp_remove(): Removing AF_XDP ethdev on numa socket 0
eth_dev_close(): Closing AF_XDP ethdev on numa socket 0
Port 0 is closed
EAL: Error - exiting with code: 1
Cause: Start ports failed
EAL: Already called cleanup

===============works on eth0 (veth) interface====================

5p8j4:/tmp # ./dpdk-testpmd --log-level=pmd.net.af_xdp:debug --no-huge --no-pci --no-telemetry --vdev net_af_xdp,iface=eth0 -- --total-num-mbufs 8192
EAL: Detected CPU lcores: 40
EAL: Detected NUMA nodes: 1
EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp
init_internals(): Zero copy between umem and mbuf enabled.
testpmd: create a new mbuf pool <mb_pool_0>: n=8192, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
eth_rx_queue_setup(): Set up rx queue, rx queue id: 0, xsk queue id: 0
libbpf: elf: skipping unrecognized data section(8) .xdp_run_config
libbpf: elf: skipping unrecognized data section(9) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: prog 'xdp_pass': BPF program load failed: Invalid argument
libbpf: prog 'xdp_pass': failed to load: -22
libbpf: failed to load object '/usr/lib64/bpf/xdp-dispatcher.o'
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
configure_preferred_busy_poll(): Busy polling budget set to: 64
Port 0: 42:5F:27:A2:63:BA
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
Press enter to exit

Telling cores to stop...
Waiting for lcores to finish...

---------------------- Forward statistics for port 0 ----------------------
RX-packets: 14 RX-dropped: 0 RX-total: 14
TX-packets: 14 TX-dropped: 0 TX-total: 14
----------------------------------------------------------------------------

+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 14 RX-dropped: 0 RX-total: 14
TX-packets: 14 TX-dropped: 0 TX-total: 14
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
eth_dev_close(): Closing AF_XDP ethdev on numa socket 0
Port 0 is closed
Done

Bye...
rte_pmd_af_xdp_remove(): Removing AF_XDP ethdev on numa socket 0

=================================test environment====================

On the worker node:
=================
worker-pool1-1:/home/test # nsenter -t 127962 -n
Directory: /home/test
Mon Oct 14 03:33:00 CEST 2024
worker-pool1-np5fr395-pccc-tool-43-1:/home/test # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if108: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 2120 qdisc noqueue state UP group default
link/ether 42:5f:27:a2:63:ba brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.96.160/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::405f:27ff:fea2:63ba/64 scope link
valid_lft forever preferred_lft forever
5: p1p1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 7e:c5:53:73:95:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::7cc5:53ff:fe73:955e/64 scope link
valid_lft forever preferred_lft forever
worker-pool1-1:/home/test # ethtool -i eth0
driver: veth
version: 1.0
firmware-version:
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
worker-pool1-1:/home/test # ethtool -i eth1
Cannot get driver information: No such device
worker-pool1-1:/home/test # ethtool -i p1p1
driver: macvlan
version: 0.1
firmware-version:
expansion-rom-version:
bus-info:
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
worker-pool1-1:/home/test #
==============================
In the container:
============
5p8j4:/tmp # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if108: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 2120 qdisc noqueue state UP group default
link/ether 42:5f:27:a2:63:ba brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.96.160/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::405f:27ff:fea2:63ba/64 scope link
valid_lft forever preferred_lft forever
5: p1p1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 7e:c5:53:73:95:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::7cc5:53ff:fe73:955e/64 scope link
valid_lft forever preferred_lft forever
=========================================================================

IntelSupport
Community Manager

Hi nine9.


Greetings for the day!


Thank you for contacting Intel Customer Support with your inquiry.

Could you please provide the details of the Ethernet product you are referring to?

Or a snapshot of the Ethernet card, which would help us proceed further with this case.


Thanks for choosing Intel.


Regards

Jerome


nine9
Beginner

Thanks, Jerome.

The macvlan interface sits on the worker node device "00:04.0 Ethernet controller: Red Hat, Inc. Virtio network device". More details are attached below (an XDP attach-mode check is sketched after the output).

==========================additional details=====================================

worker-pool1-1:/home/test # nsenter -t 127962 -n
Directory: /home/test
Tue Oct 15 03:21:20 CEST 2024
worker-pool1-1:/home/test # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if108: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 2120 xdpgeneric/id:637 qdisc noqueue state UP group default
link/ether 42:5f:27:a2:63:ba brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.96.160/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::405f:27ff:fea2:63ba/64 scope link
valid_lft forever preferred_lft forever
5: p1p1@if3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 7e:c5:53:73:95:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::7cc5:53ff:fe73:955e/64 scope link
valid_lft forever preferred_lft forever
worker-pool1-1:/home/test # ethtool -i p1p1
driver: macvlan
version: 0.1
firmware-version:
expansion-rom-version:
bus-info:
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
worker-pool1-1:/home/test #
worker-pool1-1:/home/test # exit
logout
worker-pool1-1:/home/test # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2140 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:1a:aa:b2 brd ff:ff:ff:ff:ff:ff
altname enp0s3
altname ens3
inet 10.0.10.6/24 brd 10.0.10.255 scope global eth0
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2140 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:ab:33:79 brd ff:ff:ff:ff:ff:ff
altname enp0s4
altname ens4
inet 172.16.1.168/26 brd 172.16.1.191 scope global eth1
valid_lft forever preferred_lft forever
worker-pool1-1:/home/test # ethtool -i eth1
driver: virtio_net
version: 1.0.0
firmware-version:
expansion-rom-version:
bus-info: 0000:00:04.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
worker-pool1-1:/home/test # lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device
00:04.0 Ethernet controller: Red Hat, Inc. Virtio network device
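
A side note on the output above: the eth0 entry carries "xdpgeneric/id:637", i.e. an XDP program attached in generic (SKB) mode on the veth interface, while p1p1 carries none. Below is a small hypothetical check (my own sketch; it assumes libbpf >= 0.8 for bpf_xdp_query_id(), and the interface name is just an argument) that reports which XDP attach mode currently holds a program on an interface:

=============== sketch: query the XDP attach mode on an interface ===============

/* Sketch only: report whether an XDP program is attached to an interface
 * in native (driver) or generic (SKB) mode, via libbpf's bpf_xdp_query_id().
 * Build with: cc xdp_mode_check.c -lbpf
 */
#include <stdio.h>
#include <net/if.h>
#include <linux/if_link.h>   /* XDP_FLAGS_DRV_MODE, XDP_FLAGS_SKB_MODE */
#include <bpf/libbpf.h>

static void report(int ifindex, int flags, const char *mode)
{
    __u32 prog_id = 0;

    if (bpf_xdp_query_id(ifindex, flags, &prog_id))
        printf("  %-7s: query failed\n", mode);
    else if (prog_id)
        printf("  %-7s: prog id %u attached\n", mode, prog_id);
    else
        printf("  %-7s: nothing attached\n", mode);
}

int main(int argc, char **argv)
{
    const char *ifname = argc > 1 ? argv[1] : "eth0";
    int ifindex = if_nametoindex(ifname);

    if (!ifindex) {
        perror("if_nametoindex");
        return 1;
    }

    printf("%s (ifindex %d):\n", ifname, ifindex);
    report(ifindex, XDP_FLAGS_DRV_MODE, "native");
    report(ifindex, XDP_FLAGS_SKB_MODE, "generic");
    return 0;
}

==============================================================================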

Devi_Intel
Employee

Hi nine9.

Greetings


Thank you for contacting Ethernet Community support. For further support with DPDK, kindly refer to DPDK.org or open an IPS case at IPS Support.

After registration, you will be able to open a case on the Intel® Premier Support (IPS) platform and your request will be handled by one of our engineers as soon as possible.


Thank you & Best Regards

Devi

