Beginner

Why doesn't the N3000 PAC board support 100% traffic performance?

I ran a traffic performance test with the N3000 PAC board, but could not confirm 100% performance.

My test environment setup is as follows.

1. FPGA source version: Factory_Image in Alpha2

2. For the test, I modified the connection signals (Avalon-ST egress_out <-> ingress_in) in the source (ccip_std_afu.sv); please see the attached file for details.

3. Captured the important Avalon-ST signals with Signal Tap.

4. Used a traffic analyzer (Spirent).

The test results are as follows.

1. When 99.99% traffic is sent to the 10GbE link, all traffic is received without loss.

2. When 100% traffic is sent to the 10GbE link, traffic loss occurs. At that moment the Ready signal goes low and the Pause-Req signal goes high.

In my opinion, since the link connection is 1:1 (10GbE to 10GbE), the Ready signal should always stay high and the Pause-Req signal should stay low. However, in the blue area, loss occurs whenever the Ready signal of the Avalon-ST interface goes low.

Please help me solve this problem; see the attached file for details.

10 Replies
Moderator

Re: Why doesn't the N3000 PAC board support 100% traffic performance?

Hi @SYeon, thanks for the detailed explanation of the issue.

The reason for the packet loss is that you created a design that loops traffic directly from the Ethernet Avalon-ST interface AND sends packets with the minimum interframe gap. There is a rate difference between the Spirent Test Center Ethernet traffic generator and the N3000 internal clock, and because of that difference a small number of packets will be dropped.

Setting the interframe gap to 13 bytes will result in no packet drops.

Beginner

Re: Why doesn't the N3000 PAC board support 100% traffic performance?

Hi JwChin (Intel), thanks for your reply.

I obtained the same result in an environment where my design block was inserted in the middle of the Ethernet path, without the loopback connection of the Ethernet Avalon-ST interface. For that reason, I re-tested using the basic Factory Image, with my design block excluded.

I think that setting the IFG (inter-frame gap) to 13 bytes is the same condition as lowering the traffic from 100% to 99.99% in my test, because the IFG at 100% traffic is 12 bytes according to the Ethernet standard. Therefore, loss should not occur when the IFG of 100% traffic is 12 bytes and the link speed is the same (10GbE to 10GbE).
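As a rough sanity check on this reasoning, the relative throughput with a 13-byte IFG follows directly from standard Ethernet framing overhead (7-byte preamble + 1-byte SFD + IFG per frame); the sketch below is just that arithmetic, with illustrative frame sizes:

```python
# Relative throughput when the inter-frame gap (IFG) grows from the
# standard 12 bytes to 13 bytes. Each frame occupies
# frame + 7 (preamble) + 1 (SFD) + IFG bytes on the wire.

def rate_ratio(frame_bytes: int) -> float:
    """Throughput with a 13-byte IFG relative to a 12-byte IFG."""
    return (frame_bytes + 20) / (frame_bytes + 21)

for frame in (64, 512, 1518):  # min, mid, max standard frame sizes
    print(f"{frame:>5}-byte frames: {rate_ratio(frame):.4%} of full line rate")
```

So for large frames the extra IFG byte costs well under 0.1% of line rate, which is consistent with treating an IFG of 13 bytes as backing traffic off slightly from 100%.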

Based on your opinion, we will test again with equipment other than Spirent, on the premise that Spirent's rate (clock) may be slightly high.

If you can, please also test with equipment other than the Spirent Test Center, and after that, test with the Spirent Test Center connected directly to that other equipment. I think that if you run all of these tests, you will find the cause. I will try the same and share my test results.

Thanks.

Valued Contributor II

Re: Why doesn't the N3000 PAC board support 100% traffic performance?

>I think that setting the IFG (inter-frame gap) to 13 bytes is the same condition as lowering the traffic from 100% to 99.99% in my test.

 

Yes, of course using an inter-frame gap of 13 bytes for testing means that you are not testing at full 10 Gbps.

Properly-made switches typically operate at a slightly higher clock than what is required to meet 10 Gbps to avoid packet drops caused by slight mismatch between the clock rate of the switch and connected devices. However, in the case of this FPGA board, the Ethernet IP likely operates at exactly the required clock or maybe even slightly less (e.g. 399.9999 MHz instead of 400.0 MHz), resulting in a few packets being dropped when operating at full speed with minimum inter-frame gap.
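To put rough numbers on this, even a small parts-per-million clock offset at 10 Gbps amounts to a steady surplus of bytes that must be absorbed or dropped when there is no idle time on the wire. This is a sketch; the ppm values are illustrative, not N3000 specifications:

```python
# Surplus data rate caused by a clock offset between sender and receiver.
# With back-to-back traffic (minimum IFG) there is no idle time on the
# wire to drain this surplus, so it eventually shows up as dropped packets.

LINE_RATE_BPS = 10e9  # 10GbE nominal line rate

def surplus_bytes_per_second(ppm_offset: float) -> float:
    """Bytes/s arriving beyond what the slower side can consume."""
    return LINE_RATE_BPS * (ppm_offset / 1e6) / 8

for ppm in (0.1, 1, 10):  # illustrative clock offsets
    surplus = surplus_bytes_per_second(ppm)
    print(f"{ppm:>4} ppm offset -> ~{surplus:,.0f} surplus bytes/s")
```

Even a 1 ppm offset leaves on the order of a kilobyte per second with nowhere to go, so a few drops per second at 100% load is entirely plausible.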

 

Beginner

Re: Why doesn't the N3000 PAC board support 100% traffic performance?

Hi HRZ, thanks for your interest.

From my test results, I understand that the cause is a slight difference in the internal operating clock, as you suggested.

The N3000 board is divided into blue and green areas: the green area is the user design area, and the blue area is the N3000 area that the user cannot modify. To solve the problem, I need to carefully review the Ethernet PHY, MAC, and Avalon-ST interface blocks, but these are all in the blue area, so I cannot review them.

The hardware system (physical interface) in a communication product must support wire speed. I have to prove wire speed as the developer, but I cannot because of the blue area.

I think that on the Ethernet link the receive block should use the recovered clock (locked to the transmit clock of the other side), while the transmit block should use the internal clock. I guess the clock of the transmit block in the blue area needs to be modified.

 

Currently, the clock supplied to the green area is a single clock used for both transmission and reception.
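The concern above can be illustrated with a toy simulation: a producer (the recovered receive clock) running fractionally faster than a consumer (the shared internal clock) feeding a bounded FIFO. This is purely illustrative; the rates, FIFO depth, and cycle count are made up and are not taken from the actual blue-area design:

```python
# Toy model of a clock-domain-crossing FIFO whose write side runs
# slightly faster than its read side. With back-to-back traffic the
# FIFO slowly fills and then overflows; any idle gap would let it drain.

def simulate(producer_rate: float, consumer_rate: float,
             fifo_depth: float, cycles: int) -> float:
    """Return the number of words lost to FIFO overflow over the run."""
    fill = 0.0      # current FIFO occupancy, in words
    dropped = 0.0   # cumulative overflow
    for _ in range(cycles):
        fill += producer_rate             # words written this cycle
        fill -= min(fill, consumer_rate)  # words read this cycle
        if fill > fifo_depth:
            dropped += fill - fifo_depth  # overflow is lost
            fill = fifo_depth
    return dropped

# Producer 0.01% faster than consumer, 32-word FIFO, one million cycles:
print("words dropped:", simulate(1.0001, 1.0, 32, 1_000_000))
# With matched clocks the FIFO never fills:
print("words dropped:", simulate(1.0, 1.0, 32, 1_000_000))
```

The takeaway is that no finite FIFO can absorb a sustained rate mismatch; either the transmit clock tracks the recovered clock, or some idle time (a larger IFG) is needed.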

 

thanks.

Valued Contributor II

Re: Why doesn't the N3000 PAC board support 100% traffic performance?

You can develop your own design from scratch in the traditional way, without the blue and green regions, instantiating all the IPs manually and so on. But apart from the fact that this would require a huge amount of time and effort, you would not be able to use Intel's software stack on such a design either, and you would have to develop your own software stack, too. Moreover, it is quite possible that the Ethernet IP provided by Intel does not guarantee 100% line speed with the minimum IFG, and you might either have to buy an Ethernet IP from a third party or develop one yourself. I doubt you would be able to modify Intel's Ethernet IP anyway, since these IPs are typically encrypted.

I personally worked with Xilinx's 1 GbE Ethernet IP some years ago and encountered exactly the issue you are seeing here: as soon as I tried to test at full 1 Gbps with the Spirent Test Center, I got a few dropped packets, but there were no dropped packets at 99.99% and lower speeds.

Moderator

Re: Why doesn't the N3000 PAC board support 100% traffic performance?

Thanks @HRZ for providing your insights and helping others in the community. @SYeon, I agree with HRZ's answer to your question.

Beginner

Re: Why doesn't the N3000 PAC board support 100% traffic performance?

Thanks for your reply.

I don't want to modify the IP, because the IP is the manufacturer's domain, not mine.

Most freely available IP is provided by the manufacturer so that users can easily use the resources included in the FPGA as hardware (PLLs, transceivers, I/O, block memory, ...). When an IP is generated, RTL code designed to meet a specific purpose using a small amount of resources is generated with it. The generated RTL code can be partially modified and recompiled.

I have worked on several communication-related projects, but I have never run into a lack of line speed. That may be because the MAC is not a free IP, so I only ever used the PHY (1GbE PHY, 10GbE XAUI, etc.).

The blue area of the N3000 is very different from this. It consists of blocks created with IPs and RTL code that uses those IPs. This RTL code supports many options at compile time: the target (10GbE link, 40G link, ...), whether DMA is used, whether DDR4 is included, and so on. The structure changes according to these options, and the internal memory map changes as well.

In conclusion, I am asking the manufacturer to check the blue area and find the cause. If it is a difficult part to fix, I think the manufacturer needs to address this behavior in the next version.

 

Moderator

Re: Why doesn't the N3000 PAC board support 100% traffic performance?

The cause is the clock difference as described by HRZ. Quoting his/her words:

Properly-made switches typically operate at a slightly higher clock than what is required to meet 10 Gbps to avoid packet drops caused by slight mismatch between the clock rate of the switch and connected devices. However, in the case of this FPGA board, the Ethernet IP likely operates at exactly the required clock or maybe even slightly less (e.g. 399.9999 MHz instead of 400.0 MHz), resulting in a few packets being dropped when operating at full speed with minimum inter-frame gap.

Beginner

99.9% ain't gonna make a carrier-grade.

Hi JW. I work with SYeon, and I will have to bring this issue (not meeting the due wire speed with the minimum Ethernet IFG) to the attention of our project community, including all three Korean mobile operators and major network vendors.

Note that this is one of the biggest national R&D projects: a total of 9 Intel N3000 PACs are used to provide URLLC (Ultra-Reliable Low-Latency Communication) for 5G MEC (Mobile Edge Computing) use cases. Hence 'non-wire-rate' PACs would hardly be acceptable, considering the stringency of the project.

Can't we expect Intel to deliver the advertised (100% wire speed) feature, e.g. by releasing a quick update?

Valued Contributor II

Re: 99.9% ain't gonna make a carrier-grade.

@Moon_Lee Performance aside, are you really planning to use Intel's proprietary, closed-source network IPs, which you have no control over, in a national project? A project like this should have very stringent security requirements, including absolute avoidance of any piece of proprietary code/IP, and I would assume all code for such a project would have to be developed in-house to ensure its security and functionality. At the end of the day, if you want to push Intel to do what you want, you would be better off going through your local FAE (i.e. whoever you purchased the boards from); you must have access to a high-level FAE if you are working on a national project.
