Application Acceleration With FPGAs
Programmable Acceleration Cards (PACs), DCP, DLA, Software Stack, and Reference Designs

Why doesn't the N3000 PAC board support 100% traffic performance?


I ran a traffic performance test with the N3000 PAC board, but I could not confirm 100% performance.

My test environment setup is as follows.

1. ​FPGA Source Version: Factory_Image in Alpha2

2. For the test, I modified the connections in the source so that the egress_out Avalon-ST interface loops back to the ingress_in Avalon-ST interface (please see the attached file for details).

3. Captured the key Avalon-ST signals with Signal Tap.

4. Used a Spirent traffic analyzer.

The test results are as follows.

1. When 99.99% traffic is sent to the 10GbE link, all traffic is received without loss.

2. When 100% traffic is sent to the 10GbE link, traffic loss occurs.

At that point, the Ready signal goes low and the Pause-Req signal goes high.

In my opinion, since the link connection is 1:1 (10GbE to 10GbE), the Ready signal should always stay high and the Pause-Req signal should stay low.

However, in the blue area, loss occurs whenever the Ready signal of the Avalon-ST interface goes low.

Please help me figure out how to solve this problem; see the attached file for details.


Hi @SYeon, thanks for the detailed explanation of the issue.


The reason for the packet loss is that you created a design that loops traffic directly from the Ethernet AVST interface and are sending packets with the minimum inter-frame gap. There is a rate difference between the Spirent Test Center Ethernet traffic generator and the N3000 internal clock, and because of this rate difference a small number of packets will be dropped.

Setting the inter-frame gap to 13 bytes will result in no packet drops.
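For context, here is a back-of-envelope sketch (assuming only standard Ethernet framing overhead: 8 bytes of preamble/SFD and a 12-byte minimum IFG) of how much margin a 13-byte gap buys relative to the minimum gap:

```python
PREAMBLE_SFD = 8   # bytes on the wire before every frame (preamble + SFD)
MIN_IFG = 12       # minimum inter-frame gap in bytes per IEEE 802.3

def relative_rate(ifg_bytes, frame_bytes):
    """Achievable throughput with the given IFG, as a fraction of the
    throughput achievable with the standard 12-byte minimum gap."""
    best = frame_bytes + PREAMBLE_SFD + MIN_IFG
    actual = frame_bytes + PREAMBLE_SFD + ifg_bytes
    return best / actual

# One extra gap byte costs the most with minimum-size (64-byte) frames
# and almost nothing with maximum-size (1518-byte) frames:
print(f"64B frames:   {relative_rate(13, 64):.4%}")
print(f"1518B frames: {relative_rate(13, 1518):.4%}")
```

So the extra byte of gap concedes roughly 0.07% to 1.2% of line rate depending on frame size, which is far more slack than any realistic ppm-level clock offset, consistent with the drops disappearing at IFG 13.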


Hi JwChin (Intel), thanks for your reply.

I obtained the same result in an environment where my design block was inserted in the middle of the Ethernet path, without looping back the Ethernet Avalon-ST interface.

For this reason, I re-tested using the basic factory image, excluding my design block.

I think that setting the IFG (inter-frame gap) to 13 bytes is the same condition as lowering the traffic from 100% to 99.99% in my test.

The IFG at 100% traffic is 12 bytes according to the Ethernet standard. Therefore, loss should not occur when the IFG of 100% traffic is 12 bytes and both link speeds are the same (10GbE to 10GbE).

Based on your opinion, we'll test again with equipment other than Spirent, on the premise that Spirent's rate (clock) runs slightly fast.

If you can, please test again with other equipment (not the Spirent Test Center).

After that test, please connect the Spirent Test Center directly to the other equipment and test again.

I think that if you do all these tests, you will find the cause.

I'll try the same and share my test results.



Valued Contributor II

>I think, if setting IFG (Inter-Frame Gap)​ to 13 bytes, this is the same condition as lowering from 100% to 99.99% traffic in my test.


Yes, of course using an inter-frame gap of 13 bytes for testing means that you are not testing at full 10 Gbps.

Properly-made switches typically operate at a slightly higher clock than what is required to meet 10 Gbps to avoid packet drops caused by slight mismatch between the clock rate of the switch and connected devices. However, in the case of this FPGA board, the Ethernet IP likely operates at exactly the required clock or maybe even slightly less (e.g. 399.9999 MHz instead of 400.0 MHz), resulting in a few packets being dropped when operating at full speed with minimum inter-frame gap.
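To put a number on this (a purely illustrative sketch; the 399.9999 MHz figure above is an example, not a measured value): a transmit clock that is a fraction of a ppm slow accumulates a bit deficit that must eventually surface as drops once buffering is exhausted:

```python
LINE_RATE = 10e9   # 10GbE line rate in bits per second

def deficit_bps(ppm_slow):
    """Bits per second the transmit side falls behind when its clock
    runs ppm_slow parts-per-million below the incoming line rate."""
    return LINE_RATE * ppm_slow * 1e-6

# 399.9999 MHz instead of 400.0 MHz is a 0.25 ppm shortfall:
ppm = (400.0 - 399.9999) / 400.0 * 1e6
bits = deficit_bps(ppm)
# A minimum-size frame occupies 84 bytes = 672 bits on the wire,
# so the deficit corresponds to only a few dropped frames per second.
print(f"{ppm:.2f} ppm -> {bits:.0f} bit/s deficit (~{bits/672:.1f} min-size frames/s)")
```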



The summary of the replies to my question is as follows.

The facts from the conversation above are:
1. The N3000 board does not support 100% wire speed.
   The reason is that the clock used by the Intel Ethernet IP is the same as, or slightly lower than, the required clock.
   However, this was only verified in 10G x8 mode; 40G mode was not tested and cannot be guaranteed.

2. Intel has no plans to modify the Intel Ethernet IP to support 100% wire speed.

In conclusion, the N3000 board cannot be used for projects requiring 100% wire speed.


Is the above correct?


Hi HRZ (Customer), thanks for your interest.

From my test results, I believe the cause is a slight difference in the internal operating clock, as you suggested.

The N3000 design is divided into blue and green areas: the green area is the user design region, and the blue area is the N3000 region that the user cannot modify.

To solve the problem, I would need to carefully review the Ethernet PHY, MAC, and Avalon-ST interface blocks,

but these are all included in the blue area, so I cannot review them.

The hardware system (physical interface) in a communication system must support wire speed.

As the developer, I have to prove wire speed, but I cannot because of the blue area.

I think that in the Ethernet link, the receive block uses the recovered clock (locked to the far-end transmit clock), while the transmit block uses the internal clock.

I guess the transmit block's clock in the blue area needs to be modified.

Currently, the clock supplied to the green area is designed to use one clock for both transmission and reception.
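This matches the classic elastic-buffer picture. A toy simulation (purely illustrative; the FIFO depth and ppm offset below are assumptions, not N3000 internals) shows why a 100% load eventually overflows any finite buffer while a 99.99% load never does:

```python
def simulate(rx_rate, tx_rate, load, depth_bits, seconds):
    """Toy model: the FIFO fill level grows by (offered - drained) bits
    each second; anything above depth_bits is counted as lost."""
    fill = 0.0
    lost = 0.0
    for _ in range(seconds):
        fill += rx_rate * load - tx_rate
        fill = max(fill, 0.0)          # the FIFO cannot go below empty
        if fill > depth_bits:
            lost += fill - depth_bits  # overflow -> dropped bits
            fill = depth_bits
    return lost

DEPTH = 4096 * 8                  # a hypothetical 4 KiB elastic buffer
TX = 10e9 * (1 - 0.25e-6)         # transmit clock assumed 0.25 ppm slow

print(simulate(10e9, TX, 1.0000, DEPTH, 60))   # 100% load: bits are lost
print(simulate(10e9, TX, 0.9999, DEPTH, 60))   # 99.99% load: zero loss
```

At 99.99% load the gaps between frames give the slower transmit clock time to drain the buffer, so the fill level never grows; at 100% there is no slack, and the ppm-level deficit accumulates until the buffer overflows.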



Valued Contributor II

You can develop your own design from scratch in the traditional way, without the blue and green regions, and instantiate all the IPs manually, but apart from the fact that this would require a huge amount of time and effort, you would not be able to use Intel's software stack on such a design either, and you would have to develop your own software stack, too. Moreover, it may well be that the Ethernet IP provided by Intel does not guarantee 100% line speed with the minimum IFG, and you might either have to buy an Ethernet IP from a third party or develop one yourself. I doubt you would be able to modify Intel's Ethernet IP anyway, since these IPs are typically encrypted. I personally had experience with Xilinx's 1 GbE Ethernet IP some years ago and encountered the exact same issue you are encountering here: as soon as I tried to test at full 1 Gbps with the Spirent Test Center, I got a few dropped packets, but there were no dropped packets at 99.99% and lower speeds.


Thanks for your reply.

I don't want to modify the IP, because the IP is the manufacturer's domain, not mine.

Most freely available IP is provided by the manufacturer so that users can easily use the resources (PLLs, transceivers, I/O, block memory, ...) built into the FPGA as hardware.

When the IP is generated, RTL code designed to meet a specific purpose using a small amount of resources is generated along with it.

The generated RTL code can be partially modified and recompiled.

I have worked on several communication-related projects, but I have never encountered a lack of line speed.

Perhaps this is because the MAC is not free IP, so only the PHY (1GbE PHY, 10GbE XAUI, etc.) was used.

However, the blue area of the N3000 is very different from this.

The blue area consists of blocks created from IP, together with RTL code that uses those IPs.

This RTL code supports many options at compile time, such as the target (10GbE link, 40G link, ...), DMA usage, and whether DDR4 is included.

The structure changes according to these options, and the internal memory map changes as well.


In conclusion, I'm asking the manufacturer to check the blue area and find the cause.

If it is difficult to fix, I think the manufacturer should address this behavior when developing the next version.



The cause is the clock difference, as described by HRZ. Quoting his/her words:

Properly-made switches typically operate at a slightly higher clock than what is required to meet 10 Gbps to avoid packet drops caused by slight mismatch between the clock rate of the switch and connected devices. However, in the case of this FPGA board, the Ethernet IP likely operates at exactly the required clock or maybe even slightly less (e.g. 399.9999 MHz instead of 400.0 MHz), resulting in a few packets being dropped when operating at full speed with minimum inter-frame gap.


Hi JW. Working with SYeon, I will have to bring this issue (not meeting the required wire speed with minimum Ethernet IFGs) to the attention of our project community, including all three Korean mobile operators and major network vendors.

Note that this is one of the biggest national R&D projects, in which a total of 9 Intel N3000 PACs are used to implement URLLC (Ultra-Reliable Low-Latency Communication) for 5G MEC (Mobile Edge Computing) use cases. Hence 'non-wire-rate' PACs would hardly be acceptable, considering the stringency of the project.

Can't we expect Intel to deliver the advertised feature (100% wire speed), e.g. by releasing a quick update?

Valued Contributor II

@Moon_Lee Performance aside, are you really planning to use Intel's proprietary, closed-source network IPs, over which you have no control, in a national project? A project like this should have very stringent security requirements, including absolute avoidance of any proprietary code/IP, and I would assume all code for such a project would have to be developed in-house to ensure its security and functionality. At the end of the day, if you want to push Intel to do what you want, you would be better off going through your local FAE (i.e. whoever you purchased the boards from); you must have access to a high-level FAE if you are working on a national project.


@HRZ Open-source adoption has become commonplace in national R&D, at least here in Korea. I would say that hardening security up to the required level can be agnostic to whether the code is open or closed. Moreover, building everything from scratch is neither possible nor desirable for modern telco applications these days. Even most government R&D stakeholders here are well aware of the paradigm shift toward open networking and computing. (The only exception would still be the defense sector, though.) We have no problem leveraging the Intel N3000 PAC platform, as long as Intel remains a trusted supplier.

So I am still wondering: how will Intel respond to this non-carrier-grade stigma?



Thanks @HRZ for providing your insights and helping others in the community. @SYeon, I agree with HRZ's answer to your question.


Thanks for your results and analysis!

We had the same problem in our board tests of the N3000-1 (8x10G) with Spirent and the N3000-2 (2x2x25G) with IXIA: all traffic was received without loss at 99.99% speed, and traffic loss occurred at 100% speed.
We also connected the ING-IN and EGR-OUT signals directly for output.
I read your analysis and did a Signal Tap capture on the 25G board, and found that at 100% speed the Ready signal sometimes goes to '0', and during those periods there may be valid packets.
The difference from your results is that our pause_req signal is always '0'.


Please check whether a pause-function option was enabled when compiling.

If the pause function is activated by this option, it will lower the rate in the lossy environment below 99.99%.

My test environment used a 10GbE link, so your results may differ slightly.

Also, to check the true performance, I ran the test after disconnecting the pause signal.


I couldn't find a solution for the loss in the green area.

I explained the test results and finished the project at 99.99% performance.

My project manager understood that the cause of the 99.99% performance limit was in the blue area,

and I was able to close the project.


If there must be no loss, you can use the pause function to prevent the sender from transmitting at 100%.
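As a rough sketch of the arithmetic involved (hypothetical numbers; the 0.25 ppm deficit is only an example from the discussion above): IEEE 802.3x pause quanta are 512 bit times each, so only a handful of quanta per second are needed to absorb a sub-ppm clock shortfall:

```python
QUANTUM_BITS = 512   # one IEEE 802.3x pause quantum = 512 bit times

def quanta_per_second(deficit_bps):
    """Pause quanta per second the receiver must request so the sender's
    average rate drops by deficit_bps, masking the clock deficit."""
    return deficit_bps / QUANTUM_BITS

# e.g. a 0.25 ppm-slow transmit clock at 10 Gbps falls behind by 2500 bit/s:
print(quanta_per_second(2500))  # a little under 5 quanta per second
```

In other words, pausing the sender for well under a microsecond per second of traffic is enough, which is why flow control hides this class of loss so effectively.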