Ethernet Products

Should XXV710 aggregate inter-VF throughput be limited to 25 Gbps, or by VEB/PCIe bandwidth?

User1575447903964661

Hi,

I have an XXV710-DA2 in a host on which I am running multiple VMs with SR-IOV. I am assigning two VFs from the same port to two different VMs. The VMs are using DPDK.

 

I was hoping that inter-VM/VF traffic would be switched in the VEB and that the throughput would therefore be far greater than 25 Gbps (up to the PCIe bandwidth).

 

But the total throughput (across all VMs) is limited to 25 Gbps aggregate. For example, I can only get 12.5 Gbps bidirectional throughput between two VMs: VM1 can only TX 12.5 Gbps (which is RX for VM2) and RX 12.5 Gbps (which is TX from VM2). If I move one VM to another host, they can do 25 Gbps bidirectional with no issue.
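For anyone trying to reproduce this, a bidirectional run along the following lines shows the same split (iperf3 is only an example tool here, the address is a placeholder for VM2's VF interface, and --bidir needs iperf3 >= 3.7; on older builds a second run with -R gives the same picture):

(on VM2)  iperf3 -s

(on VM1)  iperf3 -c <VM2-VF-IP> -t 30 -P 4 --bidir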

 

I have enabled loopback on all the VF interfaces on the host:

# cat /sys/class/net/p3p2/device/sriov/3/loopback

on

# cat /sys/class/net/p3p2/device/sriov/2/loopback

on
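For completeness, loopback was turned on by writing to the same sysfs nodes (this assumes the out-of-tree i40e driver's sysfs interface accepts writes, which matches what I see here):

# echo on > /sys/class/net/p3p2/device/sriov/2/loopback

# echo on > /sys/class/net/p3p2/device/sriov/3/loopback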

 

Firmware version is 7.0
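(For reference, that version string can be read from the PF, e.g. # ethtool -i p3p2 | grep firmware.)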

 

Am I missing something?

 

Thanks

Pratik

 

 

Mike_Intel
Moderator

Hello User15754479039646610788,

 

Thank you for posting in Intel Ethernet Communities. 

 

Regarding your inquiry about DPDK, we have a different team that supports it.

Please open the link below to log your inquiry.

 

https://www.intel.com/content/www/us/en/design/support/ips/training/welcome.html

Click "Login and Access" to contact the DPDK support team.

 

If you have questions, please let us know.

 

Best regards,

Michael L.

Intel Customer Support Technicians

A Contingent Worker at Intel

User1575447903964661

Thank you for the response.

 

When I try to use the link, it says "You don't have Access to IPS. Please contact the FAE to get access."

 

Nevertheless, my question is not about DPDK. Let's assume that I am using the normal kernel driver inside the VM. My question is about whether VF-to-VF throughput should be limited by the port capacity. As per the link below, that should not be the case, but I am seeing that it is.

https://www.intel.com/content/dam/www/public/us/en/documents/technology-briefs/sr-iov-nfv-tech-brief.pdf

 

Thanks

Mike_Intel
Moderator

Hello User15754479039646610788,

 

If you are having login or access issues, please try contacting their support below:

 

https://www.intel.com/content/www/us/en/design/support/ips/training/access-and-login.html

 

You may also try emailing support at this address: smg.customer.support@intel.com

 

Also, so that I can check further, kindly provide the following details:

 

  1. What is the model of your system?
  2. Are you using an on-board Ethernet card?
  3. Can you also show us any data about your inquiry?

 

If you have questions, please let us know.

 

Best regards,

Michael L.

Intel Customer Support Technicians

A Contingent Worker at Intel

User1575447903964661

Hi,

Thank you for the followup.

 

1. My system has an Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz and runs CentOS 7.

 

2. The XXV710 is in an x8 PCIe slot:

 lspci -s d8:00.0 -vvv | grep LnkSt

LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
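(For reference, 8 GT/s x 8 lanes with 128b/130b encoding is roughly 63 Gbps of raw PCIe bandwidth per direction, so the Gen3 x8 link itself should comfortably exceed 25 Gbps of VF-to-VF traffic.)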

 

3. As I indicated above, the VMs are using SR-IOV. When VM1's data leaves the NIC port (say, to VM2 on another host), I see 25 Gbps full duplex (50 Gbps of I/O). But when I move VM2 to the same host, the throughput falls to 12.5 Gbps. My basic question: do I need to do anything special to turn on NIC-based VF switching (VEB)? The host has enough compute capacity.
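For reference, my understanding is that the PF's hardware bridge mode can be inspected and, where the driver supports it, set from the host with iproute2; something along these lines (an untested assumption on my part):

# bridge -d link show dev p3p2

# bridge link set dev p3p2 hwmode veb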

 

Thanks

Pratik

Mike_Intel
Moderator

Hello User15754479039646610788,

 

Thank you for providing the information that I requested. I need to check further whether there is a setting for that.

Please give me 2 to 3 days to provide an update.

 

If you have questions, please let us know.

 

Best regards,

Michael L.

Intel Customer Support Technicians

A Contingent Worker at Intel

 

Mike_Intel
Moderator

Hello User15754479039646610788,

 

Upon further checking: if you are using SR-IOV, the inter-VF throughput is limited to 25 Gbps. You may explore a DPDK vhost implementation for higher throughput.

 

For DPDK resources, please refer to the following sites for details and assistance:

https://doc.dpdk.org/guides/sample_app_ug/vhost.html

software forum - https://software.intel.com/en-us/forums/networking
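As a rough illustration only (the flags and socket paths below are examples, not a validated configuration), testpmd can expose vhost-user ports on the host and forward between them, with the VMs attaching to the sockets as vhost-user/virtio-net devices:

dpdk-testpmd -l 1-3 -n 4 --no-pci --vdev 'net_vhost0,iface=/tmp/sock0' --vdev 'net_vhost1,iface=/tmp/sock1' -- -i

Then enter "start" at the testpmd> prompt to begin forwarding between the two vhost-user ports.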

 

If you have questions, please let us know.

 

Best regards,

Michael L.

Intel Customer Support Technicians

A Contingent Worker at Intel

User1575447903964661

Thank you for the info.

 

Can I request that you please double-check? It does not seem to be aligned with the following white paper:

https://www.intel.com/content/dam/www/public/us/en/documents/technology-briefs/sr-iov-nfv-tech-brief.pdf

 

which shows inter-VF throughput between 2 VMs exceeding line rate for the XL710 (page 40, SR-IOV east-west 2 VM is 44.44 Gbps). The document says this would be limited by PCIe bandwidth and the NIC's packet-processing capacity, and I don't think either applies to me with 2 VMs.

 

Thanks again

 

AlfredoS_Intel
Moderator

Hello User15754479039646610788,

Please allow us some time to re-check this. Please give us 2 to 3 days to provide you with an update.

Best Regards,

Alfred S

Intel® Customer Support

A Contingent Worker at Intel 

AlfredoS_Intel
Moderator

Hello User15754479039646610788,

Thank you for patiently waiting for our update.

Please check figure 15 of the document that you referenced; there is a note there that states “All numbers used for the KPIs are on the Transmit side of the VNF VM only”. The figure shows each of the 8 VMs on a 40 Gb interface getting a 4.55 Gb transmit rate, which combines to 36.45 Gb. For the same reason, it is normal that you are getting 12.5 Gb of transmit on each of the 2 VMs; if you increased it to 5 VMs, it would be 5 Gb for each.

Additionally, if you put 1 VF on each port, you may get closer to 25 Gb per VM; however, there may be additional overhead for the communication between the two ports.
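(Put differently, the per-VM transmit rate scales roughly as port line rate divided by the number of VFs sharing the port: 40 Gb across 8 VMs is about 4.5 Gb each, and 25 Gb across 2 VMs is 12.5 Gb each.)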

Best Regards,

Alfred S

Intel® Customer Support

A Contingent Worker at Intel 

Mike_Intel
Moderator

Hello User15754479039646610788,

 

I just want to check whether you still have questions or clarifications. Please let us know.

 

Best regards,

Michael L.

Intel Customer Support Technicians

A Contingent Worker at Intel

Mike_Intel
Moderator

Hello User15754479039646610788,

 

I am just sending another follow-up in case you still have questions or clarifications.

Since we have not heard back from you, I will close this inquiry now. 

 

If you need further assistance, please post a new question. 

 

Best regards,

Michael L.

Intel Customer Support Technicians

A Contingent Worker at Intel

User1575447903964661

Thank you for your response. I am all set.

 

Thanks
