
i210 ARM performance issues in iNVM mode

NShah17
Novice

Hello, we are using an i210 NIC on a Tegra K1 ARM platform.

Our reference testing has been done with a PCIe card (a Commell MPX-210D-G, an Intel i210-AT Mini-PCIe Gigabit LAN module) on a Jetson TK1 development board. This PCIe card has an external NVM flash chip, and the device ID appears as 0x1531. In this configuration the NIC performs well for file transfers, Internet access, and GigE Vision streaming (our production use case).

eth0      Link encap:Ethernet  HWaddr 00:03:1d:10:7d:ff
          inet addr:169.254.2.2  Bcast:169.254.255.255  Mask:255.255.0.0
          inet6 addr: fe80::203:1dff:fe10:7dff/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:747259 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1314075 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2551469529 (2.5 GB)  TX bytes:10075762394 (10.0 GB)
          Memory:32200000-32300000

We have developed our own custom board with an embedded i210, but it operates in iNVM mode and has no external flash attached. The device ID enumerates as 0x157b. We were able to program the chip using eepromARMtool, and we can use it for network file transfers, Internet access, etc. without difficulty. However, when connecting to the GigE camera using GVSP, we see an unacceptably high packet drop rate. Note the dropped/overrun packet counts in the ifconfig output:

eth0      Link encap:Ethernet  HWaddr 00:50:c2:c9:9a:fc
          inet addr:169.254.2.2  Bcast:169.254.255.255  Mask:255.255.0.0
          inet6 addr: fe80::250:c2ff:fec9:9afc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:257864 errors:0 dropped:20960 overruns:20960 frame:0
          TX packets:450 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2308711001 (2.3 GB)  TX bytes:29940 (29.9 KB)
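
For reference, this is roughly how we have been checking where the drops are counted and enlarging the receive ring (a sketch only: eth0 is assumed to be the i210 port, and the exact counter names vary slightly between igb driver versions):

    # hardware/driver drop counters exposed by the igb driver
    ethtool -S eth0 | grep -Ei 'drop|miss|no_buffer|fifo'

    # show current and maximum descriptor ring sizes, then enlarge the RX ring
    ethtool -g eth0
    ethtool -G eth0 rx 4096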

We have attempted the following (roughly the commands involved are sketched after this list):

* Increased the MTU to 9000

* Set the sysctl variables net.core.{rmem_max, rmem_default, wmem_max, wmem_default} to 33554432

* Updated the igb driver to the latest stable version, 5.3.4.4

* Ran iperf tests on the NIC (average result in all cases: ~900 Mbit/s)

* Verified that the reference design works as expected (Jetson TK1 + i210 mPCIe card + Point Grey GigE camera)
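
For clarity, a rough sketch of the commands behind the list above (the interface name and the iperf server address are placeholders):

    # jumbo frames on the camera interface
    ip link set dev eth0 mtu 9000

    # raise the socket buffer limits to 33554432 bytes (32 MiB)
    sysctl -w net.core.rmem_max=33554432
    sysctl -w net.core.rmem_default=33554432
    sysctl -w net.core.wmem_max=33554432
    sysctl -w net.core.wmem_default=33554432

    # throughput sanity check (averages ~900 Mbit/s in our tests)
    iperf -c <iperf-server-ip> -t 30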

Short of kernel and driver debugging, we have exhausted our debugging options. The only hardware difference between our custom and reference design is the (lack of) external NVM. I would like to know whether Intel has any information on how iNVM mode can affect performance in general, and specifically on ARM Linux.
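
For completeness, this is roughly how we tell the two boards apart from software (8086 is the Intel PCI vendor ID; 1531 is the flash-backed part and 157b the iNVM-only one, as noted above):

    # show the i210 with its vendor:device ID, e.g. [8086:1531] vs [8086:157b]
    lspci -nn | grep -i ethernet

    # driver and firmware/NVM version as reported by the driver
    ethtool -i eth0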

Thank you,

Neel

edit 2016-04-06 formatting

CarlosAM_INTEL
Moderator

Hello neelfirst,

Thank you for contacting the Intel Embedded Community.

In order to better understand this situation, we would like to address the following questions:

Could you please tell us which port type (SerDes, copper, or other) is used on the working and affected designs? Feel free to include block diagrams to provide a detailed answer.

Could you please let us know the NVM images related to the functional and faulty designs?

Could you please clarify whether the working and faulty designs have the Ethernet controller on add-in cards or integrated into the board?

Could you please give us the complete part number of the affected Ethernet controller?

Thanks in advance for your help to solve this inconvenience.

Best Regards,

Carlos_A.

 

NShah17
Novice

Hi Carlos, thanks for your response.

In both cases we are using copper ports.

The working case uses an add-on PCIe card and the faulty case uses the WGI210ATSLJXQ part number embedded on board.

Do you mean I should provide the HEX file contents of the NVM images?

Thanks,

Neel

CarlosAM_INTEL
Moderator

Hello neelfirst,

 

Thanks for your reply.

Please let me rephrase my question: could you please let us know the Flash images used on the functional and faulty designs?

By the way, could you please confirm whether the affected implementation complies with the recommendations and guidelines stated in the Intel(R) Ethernet Controller I210-AT Reference Schematics (http://www.intel.com/content/dam/www/public/us/en/documents/schematic/i210-at-i211-at-1g-base-t-reference-design-schematic.pdf)?

Thanks again for your cooperation to solve this problem.

Best Regards,

Carlos_A.

NShah17
Novice

Hi Carlos, yes, we have followed the reference schematic. Here is the relevant page of our schematic: https://www.dropbox.com/s/m7shudgom4199pc/Page19-PEX-GIGE-LAN-PHY.pdf?dl=0

I am having difficulty understanding what you mean by the Flash images. For our faulty board we used I210_Invm_Copper_NoAPM_v0.6.HEX, modified with our MAC address. The working reference board has NVM attached; I can provide the raw output of the NVM, but I have no further information beyond that.
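
If a raw dump would help, I would collect it with something like this (assuming eth0 is the i210 port):

    # hex dump of the NVM/iNVM contents as seen by the driver
    ethtool -e eth0

    # or save the dump as a binary image
    ethtool -e eth0 raw on > i210_nvm.bin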

Really I just want to know whether there are performance differences in iNVM mode that could be causing a high packet drop rate.

Thanks,

Neel

CarlosAM_INTEL
Moderator

Hello neelfirst,

Thanks for your update.

Based on the provided information, we would like to address the following questions:

Could you please tell us if your design was reviewed by Intel?

 

When you stated "The only hardware difference between our custom and reference design is the (lack of) external NVM.", could you please clarify whether this includes using the same magnetics?

If your design is the same as the reference design, could you please let us know if you have attempted to remove the flash part from the reference board and program the iNVM onto it, to see if there is still a performance decrease?

Thanks again for your help to solve this case.

Best Regards,

Carlos_A.

NShah17
Novice

Hello Carlos, thanks for your response.

This design did not undergo Intel design review services.

Now that you mention it, the magnetics are different: the reference design uses a discrete PT12S03, while our custom design uses an integrated Bel Fuse V890-1AX1-A1. The RF properties seem roughly identical; the only difference I see is that the discrete magnetics use Bob Smith termination whereas the integrated magnetics do not.

We will attempt to remove the NVM from the reference design and reprogram with iNVM.

But first I am attempting to attach NVM to our custom design and program the external NVM.

This is a good debug suggestion, thank you.

OChri
Beginner

Dear Neel,

Dear Carlos,

We are currently facing the same problem. We have a custom design using the Tegra K1 with two i210 controllers with attached NVM (firmware 3.25, 0x800005cf, flashed manually). iperf works perfectly, but we have problems transferring specific frame sizes (on both interfaces). We debugged it down to the following problems:

* iperf shows data rates of ~940 Mbit/s, with no errors in ifconfig

* Specific frame lengths received by the i210 show errors (67 error, 68 ok, 69 error, 70 ok, ... 94 ok, 95 error, 96 ok, 97 ok, ... 160 ok, 161 error, 162 ok, 163 error, ...). This was tested by sending 1, 2, 3, ... bytes using nc via TCP (a reproduction sketch follows after this list). These errors are always reproducible!

* This looks like some sort of "block error" pattern

* The error seen via tcpdump/Wireshark is always the same: independently of the payload size sent, when an error occurs only the last byte of the TCP payload is changed randomly (compared to the one sent)!

* With RX checksum offloading enabled, the packet is marked as having a correct TCP checksum -> tcpdump sees the wrong data

* With RX checksum offloading disabled, tcpdump shows the error and Linux drops the packet

* Debugging the DMA frame in the igb driver shows that the error is already present there (so it is not a Linux driver/stack issue)

* We also tried igb drivers 5.0.3 and 5.3.4.4
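
For reference, the payload-length sweep can be reproduced with something like the following (a sketch only; addresses and port are placeholders, with the listener on the TK1 and the sender on another host):

    # on the receiving TK1 (i210 interface): capture the traffic and keep re-listening
    tcpdump -i eth0 -w frames.pcap tcp port 5000 &
    while true; do nc -l -p 5000 > /dev/null; done &

    # on the sending host: one TCP connection per payload length, 1..200 bytes of 'A'
    for n in $(seq 1 200); do
        head -c "$n" /dev/zero | tr '\0' 'A' | nc -q 1 <tk1-ip> 5000
    done

    # toggle RX checksum offloading to compare the two behaviours described above
    ethtool -K eth0 rx off    # or: rx on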

@Neel: Did you find a solution yet?

Carlos_A Do you have any idea of what is going wrong?

Best regards,

Olaf

CarlosAM_INTEL
Moderator

Hello Olaf,

Thank you for contacting the Intel Embedded Community.

In order to better understand this situation, we would like to address the following questions:

Could you please confirm whether your design is based on the guidelines stated in the Intel(R) Ethernet Controller I210 Design Guide (https://www-ssl.intel.com/content/www/xa/en/secure/intelligent-systems/privileged/gbe-i210-design-guide.html)?

 

If this document is inaccessible to you, an EDC Privileged account is needed. To learn more about the benefits of an EDC Privileged account, go to http://www.intel.com/content/www/us/en/embedded/embedded-design-center-support.html and click "APPLY NOW" under the heading "Apply for extras with privileged access to the Intel EDC". After you submit the application, please let us know and we will expedite its review.

 

Could you please tell us if your design has been reviewed by Intel? Please give us details related to this.

Thanks for your collaboration to solve this inconvenience.

Best Regards,

Carlos_A.

OChri
Beginner

Hi Carlos_A,

yes, we followed the design guidelines.

Here are some further information:

There is a post on the Nvidia forum saying:

There is a known i210 issue related to the Intel Ethernet controller generating upstream "VDM Type 1" (Vendor Defined Message) messages. These cause the PCIe interface to hang, and occurred immediately after boot; no specific test was required to trigger the issue.

This requires disabling VDM Type 1 in the Intel i210 firmware, which means MCTP VDM over PCIe cannot be used.

https://devtalk.nvidia.com/default/topic/903957/jetson-tk1/tk1-intel-ethernet-controller-i210-it/post/4760503/

We also face this issue, where one of the controllers is sometimes not recognized by the PCIe host.

How can we create a firmware image that disables this kind of PCIe message? Currently we are using a stock image: Dev_Start_I210_Copper_NOMNG_4Mb_A2_3.25_0.03.bin

 

Another issue:

Sending frames from the i210 works fine. Unfortunately, two extra bytes are added to every outgoing frame (independently of frame size) which do not belong to the TCP payload. Wireshark identifies them as a "VSS-Monitoring ethernet trailer". Is the i210 adding padding bytes on purpose?
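
For reference, this is roughly how we isolate those trailer bytes from a capture (the capture file name is a placeholder):

    # print the bytes Wireshark classifies as an Ethernet trailer, per frame
    tshark -r outgoing.pcap -T fields -e frame.number -e eth.trailer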

 

 

Best regards,

 

Olaf

NShah17
Novice

Hi Olaf, our issue turned out to be a noise-filtering problem on the power supply line. Why this does not affect the Jetson is unclear (our power design initially supported the Jetson, and later our custom design). But when the board is powered from a known-good ("gold standard") power supply, the issues disappear.

Best regards,

Neel

CarlosAM_INTEL
Moderator

Hello neelfirst,

Thanks for your reply; the information is useful.

Best Regards,

Carlos_A.

OChri
Beginner

Hi everybody,

We finally found out what the problem was. We made a mistake in the schematic and connected some (not all) DM/DQS lines to the wrong block of data lines at the DDR3 chip. The K1's data swizzling evidently could not compensate on those lines (it only works on a per-block basis). This resulted in some interchanged bytes within a 64-bit DDR3 access. As long as those bytes were accessed through the cache (i.e. in units of at least 32 bits), the byte ordering did not matter. However, we saw the problems described above with DMA transactions (which PCIe, and therefore our NIC, performs), since these sometimes write single bytes.

Thanks for helping out!

Olaf
