Embedded Connectivity

TX and RX descriptors of DPDK

GMoor5
Beginner

Hi,

Can anyone clarify the purpose of the TX and RX descriptors in DPDK? Is there any relation between throughput and the TX/RX descriptor values?

Thanks,

Ganesh

Muthurajan_J_Intel

Hi,

Descriptors are the mechanism through which you communicate with the NIC hardware. On the RX side, you pass the NIC pointers to empty buffers, and the NIC returns pointers to buffers filled with received packets. On the TX side, you pass the NIC pointers to buffers that need to be transmitted, and the NIC returns pointers to buffers it has finished transmitting, which you can then reuse.

Please refer to http://networkbuilders.intel.com/docs/Network_Builders_RA_vBRAS_Final.pdf

Figure 9 there shows very nicely the circular buffer of packet descriptors with its head and tail pointers.

For more details, please refer to the 82599 datasheet: https://www-ssl.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html


(P.S.: Have you registered at www.dpdk.org? There are vibrant community discussions you will enjoy and benefit from. You can register via the dev mailing-list info page: http://www.dpdk.org/ml/listinfo/dev )

GMoor5
Beginner

Thanks for your quick response, Muthuraj. In the L2fwd application, after increasing the TX descriptor value to 1024, I am seeing good performance. But with 64-byte packets there are many packet drops. Increasing the descriptor value increases the size of the ring, right? So I should see good performance, right? Then why am I seeing so many packet drops with 64-byte packets? Is there a theoretical reason behind this?

Thanks,

Ganesh

Muthurajan_J_Intel

With a 64-byte packet size, packets arrive much faster than with larger packet sizes. So your code has a smaller time budget to receive, process, and transmit each packet (if you are doing run-to-completion). You want your system in its best configuration so that packet handling is as efficient as possible.

Please check the following:

1) Is it a dual-socket or single-socket system?

2) Are the BIOS settings optimal? i.e., a) Is NUMA enabled (in the case of a dual-socket system)?

b) Are ACPI power states DISABLED?

3) Are you using huge pages? What size?

4) What are your command-line options in terms of port mask? Make sure you are using cores and NICs in close proximity, in the case of a multi-socket system.

5) Can you check whether the L2fwd application parameters (e.g., threshold values and buffer sizes) are at their optimal settings?

BTW, this excellent paper discusses in detail the mechanisms that can introduce inefficiencies in a dual-socket system:

http://networkbuilders.intel.com/docs/Network_Builders_RA_vBRAS_Final.pdf

GMoor5
Beginner

Muthuraj,

Thanks for the document. Please find the answers inline.

1) Is it a dual-socket or single-socket system?

Dual socket.

2) Are the BIOS settings optimal? a) NUMA enabled (dual-socket system)? ==> Enabled

b) ACPI power states DISABLED? ==> Enabled

3) Are you using huge pages? What size? ==> Yes, I am using 1G hugepages, 32G per socket.

4) What are your command-line options in terms of port mask? ==> Using cores and NICs from socket 0.

5) Are the L2fwd application parameters (threshold values, buffer sizes) at their optimal settings? ==> In DPDK 1.5.1 l2fwd, the only change I have made is increasing the TX descriptor value to 1024.

Thanks,

Ganesamoorthy

Muthurajan_J_Intel

How have you enabled NUMA in the BIOS? It is good that you have NUMA enabled.

I see you have ACPI power states ENABLED. Please DISABLE them.

Regarding point 4: run `cat /proc/cpuinfo` and note how Linux numbers the cores on your system. Then check the coremask you pass on the command line against that numbering, to make sure you are using cores from the same socket.

Thanks,

Muthurajan_J_Intel

In addition,

What processor type are you using?

Is it Sandy Bridge or later?

Generations before Sandy Bridge do not have DDIO, and DDIO is a significant performance improvement, so please ensure your processor generation is Sandy Bridge or newer.

Also, are you sure you are plugging the NIC into a PCIe slot that comes directly from the CPU? Some PCIe slots come from the south bridge. Please make sure you are using one that connects directly to the CPU.


Natalie_Z_Intel
Employee

Ganesh, you may also find this document helpful: https://www-ssl.intel.com/content/www/us/en/intelligent-systems/intel-technology/intel-dpdk-release-notes.html?wapkw=326001 (Intel® Data Plane Development Kit: Release Notes). The Frequently Asked Questions (FAQ) chapter, Chapter 8, starts on page 55.

Have a great day! LynnZ. Thanks, muthurajanjayakumar, for your responses, too!
