
E810-C iWARP performance

vsi
Novice

Hello,

I am experimenting with iWARP performance on the E810-C, and I could use some insight to understand the following scaling scenario:

 

* If I run a perftest tool such as ib_send_bw over one QP, I get around 49 Gb/s of throughput:

ib_send_bw -d rdmap2s0f0 -R -D 10 --report_gbits -q 1                 # server

ib_send_bw -d rdmap2s0f0 -R $server_ip --report_gbits -q 1 -D 10      # client

 

* With two QPs this rises to around 88 Gb/s, and with three to about 91 Gb/s (one server and one client, using the -q flag of ib_send_bw):

ib_send_bw -d rdmap2s0f0 -R -D 10 --report_gbits -q $number_of_qps               # server

ib_send_bw -d rdmap2s0f0 -R $server_ip --report_gbits -q $number_of_qps -D 10    # client

 

* Similar scaling happens if I run several clients against one server, each with 1 QP, on different ports (the -p flag of ib_send_bw); a sketch of such a run follows below.
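For illustration, a two-client variant of the above looks roughly like this (the port numbers are arbitrary):

# on the server host: one listening instance per client, each on its own port
ib_send_bw -d rdmap2s0f0 -R -D 10 --report_gbits -q 1 -p 18515 &
ib_send_bw -d rdmap2s0f0 -R -D 10 --report_gbits -q 1 -p 18516 &

# on the client host: one client per server instance
ib_send_bw -d rdmap2s0f0 -R $server_ip --report_gbits -q 1 -D 10 -p 18515 &
ib_send_bw -d rdmap2s0f0 -R $server_ip --report_gbits -q 1 -D 10 -p 18516 &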

 

I am using the following software:

ice 1.4.11

irdma 1.4.24

ib_send_bw 5.92

CentOS 7.9.2009

 

Is this ~50 Gb/s the expected throughput per QP, or is there a way to alter this behaviour?

I am asking because this is in rather stark contrast to the RoCE-only cards at my disposal: with those I can reach about 80 Gb/s with a single QP, albeit they tend to scale less as QPs are added.

 

Any thoughts on the matter are highly appreciated.

 

--

Vesa

Caguicla_Intel
Moderator

Hello Vesa,


Thank you for posting in Intel Ethernet Communities. 


Please provide the firmware version of the adapter as well as the PBA number so we can identify whether you are using an Original Equipment Manufacturer (OEM) or retail version of the Intel Ethernet Adapter. You may refer to the link below on where to find the PBA number. It consists of a 6-3 digit number located at the end of the serial number.

Identify Your Intel® Network Adapter Model Using PBA Number

https://www.intel.com/content/www/us/en/support/articles/000007022/network-and-i-o/ethernet-products.html
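If physical access is inconvenient, the part and serial numbers can usually also be read from the adapter's PCI Vital Product Data on Linux, for example ($device being the PCI address of the adapter):

lspci -vvv -s $device | grep -E '\[PN\]|\[SN\]'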


Looking forward to your reply.


We will follow up after 3 business days in case we don't hear from you.


Best regards,

Crisselle C.

Intel® Customer Support


vsi
Novice

Hello,

Please find the information requested below:

The cards are retail, i.e. they didn't come with the machines but were purchased separately later for a specific purpose.

Firmware on all five cards at my disposal appears to be 2.15 0x800049c3 1.2789.0 (from ethtool -i $NIC | grep firmware-version).

I can inspect the cards physically in a few days, but in the meantime perhaps this is sufficient for the PBA/serial/part numbers:

From the lspci -vvvs $device output:

[V1] Vendor specific: Intel(R) Ethernet Network Adapter E810-CQDA2
[PN] Part number: K91258-006
[SN] Serial number: B49691AAA018,B49691AAA090,B49691AAA038,B49691AAA0E8,B49691AAA168
[V2] Vendor specific: 4920
[RV] Reserved: checksum good, 1 byte(s) reserved

 

 

All the cards are from the same order/batch.

 

Thanks for looking into this.

 

--

Vesa

vsi
Novice

Hello,

As I noticed that newer firmware was available, I upgraded a couple of the NICs to firmware version 2.50 0x800077a6 1.2960.0.

This, however, didn't seem to affect single-QP performance.

 

--

Vesa

Caguicla_Intel
Moderator

Hello Vesa,


Thank you for providing the requested information.  


Please allow us to further check on your query. We will get back to you as soon as possible but no later than 2-3 business days. 


Hoping for your kind patience. 


Best regards,

Crisselle C.

Intel® Customer Support


Caguicla_Intel
Moderator

Hello Vesa,

 

Apologies for the delay on this matter.

 

Please see the update below from our higher level team.

 

Would you be able to try RDMA driver version 1.5.2 (latest) for the E810 and observe whether it helps with the performance issue?

https://downloadcenter.intel.com/download/29751/Linux-RDMA-Driver-for-the-E810-and-X722-Intel-Ethernet-Controllers
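For reference, the install flow for these out-of-tree driver tarballs is typically along these lines (please treat this as a sketch; the authoritative steps are in the README inside the package):

tar xzf irdma-1.5.2.tgz
cd irdma-1.5.2
./build.sh
modprobe -r irdma && modprobe irdma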

 

We hope to hear from you soon. 

 

Should there be no response from you, I’ll make sure to reach out after 3 business days.

 

Best regards,

Crisselle C.

Intel® Customer Support

 

vsi
Novice

Hi,

 

Thanks for getting back to me.

 

To summarise the current driver/firmware situation:

Intel(R) Ethernet Network Adapter 2.80 (2.50) 1592 00:002 - as reported by nvmupdate64e

irdma module is at 1.5.2 (from modinfo)

ice module is at 1.5.8 (from modinfo)
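For completeness, the module versions above were read roughly like this:

modinfo irdma | grep '^version'
modinfo ice | grep '^version'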

 

Performance reported by ib_send_bw is still the same with a single QP. While not quite the same thing, I noticed that with ib_read_bw I get about 72 Gbit/s:
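That test was invoked analogously to the send test, i.e. along these lines:

ib_read_bw -d rdmap2s0f0 -R -D 10 --report_gbits -q 1                 # server
ib_read_bw -d rdmap2s0f0 -R $server_ip --report_gbits -q 1 -D 10      # client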

 

I've also verified that the CPU governor is set to performance, and I've used taskset to confirm that the correct cores are being used.
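Roughly along these lines (the pinned core here is only an example):

cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor    # should print "performance" for every core
taskset -c 2 ib_send_bw -d rdmap2s0f0 -R -D 10 --report_gbits -q 1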

I've also checked the switch ports for errors but haven't caught any. Likewise, ethtool -S doesn't seem to report any error/drop counters increasing while the benchmark runs.
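E.g. by sampling the counters before and after a run:

ethtool -S $NIC | grep -iE 'err|drop|discard'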

So far none of the driver/firmware upgrades have produced any changes in the performance observed using ib_send_bw. 

Caguicla_Intel
Moderator

Hello Vesa,


Thank you for the swift reply and effort in trying out the recommendation. 


Please allow us to re-escalate this request to our higher level for further investigation. Rest assured that we will give you an update as soon as possible but no later than 2-3 business days. 


Thank you for your kind understanding. 


Best regards,

Crisselle C.

Intel® Customer Support


Caguicla_Intel
Moderator

Hello Vesa,


Good day!


Please be informed that this request is still being investigated by our engineers. Rest assured that we will give you an update as soon as possible, but no later than 2-4 business days.


Thank you for your kind patience.


Best regards,

Crisselle C.

Intel® Customer Support


Caguicla_Intel
Moderator

Hello Vesa,


Apologies for the delay on this matter. 


Please see the update below from our engineering team, and feel free to let us know if you have additional questions or need clarification.


Columbiaville is designed to deliver its best performance using multiple QPs, and unfortunately we cannot reach maximum bandwidth with a single QP. This is the case whether the protocol is set to iWARP or RoCEv2 mode on our E810s.


We’ve documented this in the irdma README as well.

-----------
Performance
-----------
RDMA performance may be optimized by adjusting system, application, or driver
settings.

- Flow control is required for best performance in RoCEv2 mode and is optional
  in iWARP mode. Both link-level flow control (LFC) and priority flow control
  (PFC) are supported, but PFC is recommended. See the "Flow Control Settings"
  section of this document for configuration details.

- For bandwidth applications, multiple queue pairs (QPs) are required for best
  performance. For example, in the perftest suite, use "-q 8" on the command
  line to run with 8 QP.
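Applied to the commands from earlier in this thread, that would be, for example:

ib_send_bw -d rdmap2s0f0 -R -D 10 --report_gbits -q 8                 # server
ib_send_bw -d rdmap2s0f0 -R $server_ip --report_gbits -q 8 -D 10      # client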


Awaiting your reply.


We will make sure to reach out after 3 business days in case we don't hear from you.


Best regards,

Crisselle C.

Intel® Customer Support


vsi
Novice

Hello,

 

Thank you for your reply.

I take it that ~50 Gbit/s is simply the expected single-queue-pair throughput with this hardware, so there is presumably nothing else wrong with the setup that would explain the performance I am seeing.

 

Thank you for your help in this matter.

 

--

Vesa

Caguicla_Intel
Moderator

Hello Vesa,


Thank you for the reply. 


Since you have already marked this thread as resolved, please be informed that we will now close this request. Feel free to post a new question if you have any other inquiries in the future, as this thread will no longer be monitored.


May you have an amazing day and stay safe!


Best regards,

Crisselle C.

Intel® Customer Support

