I am simulating a bidirectional 10G Base-R example design MAC with the only change being "Use legacy Ethernet 10G MAC Avalon Streaming Interface" option enabled. In my testbench the core has all config registers set to default and is operating as expected.
When I send Avalon streaming data to the core, I see the IP core deassert the avalon_st_tx_ready signal unless I leave an average gap between packets of more than 3 clock cycles (at 156.25 MHz). Because the legacy interface makes my avalon_st_tx_data 64 bits wide, this gap on the input interface equates to an IPG of 24 byte times rather than the default 12. I cannot see anywhere in the user manual how to get the core to run at the correct rate while using the legacy interface.
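For reference, the arithmetic behind the 24-byte figure can be checked directly; this is a trivial sketch, assuming only the 64-bit data width and the stated 3-cycle gap:

```python
# A 64-bit Avalon-ST interface carries 8 bytes per clock cycle.
BYTES_PER_CYCLE = 64 // 8

# A gap of 3 idle cycles between packets therefore corresponds to:
ipg_bytes = 3 * BYTES_PER_CYCLE

# versus the standard minimum Ethernet IPG of 12 byte times:
STANDARD_IPG_BYTES = 12

print(ipg_bytes, STANDARD_IPG_BYTES)  # 24 12
```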
Do you have any suggestions?
Just wondering, have you referred to
- chapter 184.108.40.206. Migration—Maintains 64-bit on Avalon Streaming Interface (page 30) of the user guide (doc link) to ensure the 312.5 MHz and 156.25 MHz clocking are connected correctly?
I also found AN735, which covers the migration guidelines and highlights the IP differences for the user.
Thank you for these suggestions.
I can say that I did read the chapter you mentioned about connecting the clocks correctly, and I have set these up to be synchronous in my simulation. I had not seen that AN before; however, having read through it, there are no new hints or tips that suggest what is causing this issue.
I wonder if what I'm seeing is due to the built-in IPG averaging feature in the 10G variant? To test the behaviour, I send 999 packets to the core on the 64-bit AVST interface with a gap of 3 clock cycles between them, then send 1 packet after a gap of only 2 cycles.
I repeat this cycle 20 times for good measure. After the 12th repetition, when the 13th packet with a gap of only 2 cycles is sent to the core, I start to see the ready signal deassert every time I send the packet with the shorter gap.
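To make the averaging hypothesis concrete, here is a toy credit-counter model of an IPG-averaging scheme. The core's internal counters are not documented, so REQUIRED_GAP_BYTES and INITIAL_CREDIT are pure assumptions chosen to reproduce the observed trigger point, not actual core parameters:

```python
# Hypothetical model: each packet's gap adds (gap_bytes - required) to a
# credit counter; when credit goes negative, the core deasserts ready to
# recover the deficit. All numeric parameters below are assumptions.
BYTES_PER_CYCLE = 8        # 64-bit interface
REQUIRED_GAP_BYTES = 24    # assumed average gap the core enforces (3 cycles)
INITIAL_CREDIT = 96        # assumed starting credit in the averaging counter

credit = INITIAL_CREDIT
first_deassert = None
for repetition in range(1, 21):
    # 999 packets with a 3-cycle gap: exactly 24 bytes each,
    # so the credit counter neither gains nor loses.
    for _ in range(999):
        credit += 3 * BYTES_PER_CYCLE - REQUIRED_GAP_BYTES
    # 1 packet with a 2-cycle gap: 16 bytes, an 8-byte deficit.
    credit += 2 * BYTES_PER_CYCLE - REQUIRED_GAP_BYTES
    if credit < 0 and first_deassert is None:
        first_deassert = repetition

print(first_deassert)  # 13
```

Under these assumed numbers the model first deasserts on the 13th short-gap packet, matching what I observe; the point is only to illustrate how an averaging counter could delay the backpressure, not to claim these are the core's real parameters.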
Thank you very much for your help @Deshi_Intel