Beginner

Question about imissed errors when receiving packets with the DPDK bonding PMD

I used the DPDK bonding PMD (rte_eth_bond_api) to receive packets on a bonded port made up of four 82599EB 10 Gbps ports, and then sent them back to the network through another bonded port.

When the traffic reaches 20 Gbps (about 5 Gbps per physical port), some packets are dropped by the hardware every time because there are no free mbufs in the RX rings; these drops are counted as imissed errors in struct rte_eth_stats.
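For reference, the imissed counter can be read per port with rte_eth_stats_get(). A minimal polling sketch (the one-second interval is illustrative, and the port_id type is uint8_t in DPDK releases of the 2.2 era):

```c
#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>
#include <rte_ethdev.h>

/* Poll the imissed counter on one port; a growing value between polls
 * means the NIC dropped packets because the RX ring had no free mbufs. */
static void poll_imissed(uint16_t port_id)
{
    struct rte_eth_stats stats;
    uint64_t prev = 0;

    for (;;) {
        if (rte_eth_stats_get(port_id, &stats) == 0) {
            if (stats.imissed > prev)
                printf("port %u: +%" PRIu64 " imissed (total %" PRIu64 ")\n",
                       port_id, stats.imissed - prev, stats.imissed);
            prev = stats.imissed;
        }
        sleep(1); /* illustrative polling interval */
    }
}
```

When bonding is in use, note that stats read on the bonded port aggregate the slaves, so polling each physical port separately shows which link is overrunning.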

I use 4 cores with RSS, set nb_desc to the maximum ring size, and use the vector PMD for high performance, but it does not help.

How can I avoid imissed errors? Could the DPDK bonding driver be affecting performance?
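Besides deeper rings, a common RX-side mitigation is simply a larger mbuf pool, so the application can refill descriptors during bursts. A hedged setup sketch for one queue (the pool size, cache size, and descriptor count are illustrative values, not tuned recommendations):

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define NB_MBUF 262144 /* illustrative: oversize the pool so bursts do not exhaust it */
#define NB_RXD  4096   /* illustrative: deep RX ring (the 82599 supports up to 4096 descriptors) */

/* Set up one RX queue with a deep descriptor ring backed by a large mempool.
 * The pool name must be unique per rte_pktmbuf_pool_create() call. */
static int setup_rx_queue(uint16_t port_id, uint16_t queue_id, unsigned int socket)
{
    struct rte_mempool *pool = rte_pktmbuf_pool_create("rx_pool",
            NB_MBUF, 256 /* per-lcore cache */, 0,
            RTE_MBUF_DEFAULT_BUF_SIZE, socket);
    if (pool == NULL)
        return -1;
    return rte_eth_rx_queue_setup(port_id, queue_id, NB_RXD,
                                  socket, NULL /* default rx_conf */, pool);
}
```

If imissed still grows with deep rings and a large pool, the bottleneck is usually the per-core RX processing rate rather than descriptor exhaustion, so spreading queues across more cores (or checking that RSS actually balances the flows evenly) is the next thing to verify.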

Thanks.

Moderator

Hello lb,

Thank you for contacting the Intel Embedded Community.

The following references may help:

- DPDK rte_eth_stats struct reference: http://dpdk.org/doc/api-2.2/structrte__eth__stats.html
- [dpdk-dev] [PATCH v2] ethdev: add Rx error counters for missed, badcrc and badlen packets: http://dpdk.org/ml/archives/dev/2014-June/003331.html
- [dpdk-dev] ethdev: don't count missed packets in erroneous packets counter: http://dpdk.org/dev/patchwork/patch/11390/

We hope that this information is useful to you.

Best Regards,

Carlos_A.
