09-30-2009 09:08 AM
Hi, I have the Altera Stratix II GX Dev Board. I'm using the Altera PCIe 9.0 IP core with the Avalon-ST interface and I ran into some problems. I transfer 4 MB of data in 128-byte blocks plus a header. After about 10 of these 4 MB transfers, the transfer rate decreases until no more transfer is possible. If I check the Avalon interface with SignalTap II, I get the results attached to this post. I use 4 lanes (1 VC) and tx_st_ready is always asserted during a transfer. But at some point (after about 7 transfers) tx_st_ready begins to de-assert. It gets worse and worse until it stays at 0 and I'm not able to transfer any more data. Any ideas? The PC uses an Intel DG33TL mainboard (Intel G33 chipset).
11-22-2009 11:49 AM
Just in case you are still struggling with this problem, and to inform others who might experience a similar one: the PCI Express specification states that when a malformed TLP is received, flow control information does not have to be updated. After a few malformed TLPs, the transmitter therefore runs out of credits, blocking any further transmissions. You should check whether the TLPs you are sending conform to the specification.
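One cheap sanity check along these lines is to compare the Length field in each outgoing TLP header against the payload actually presented on the Avalon-ST interface. This is a minimal sketch, assuming the common TLP header layout from the PCIe base spec (Fmt in bits 30:29 of DW0, Length in dwords in bits 9:0); the function name and the max-payload argument are hypothetical, not part of any Altera API.

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal sanity check for an outgoing Memory Write TLP.
 * Assumes the common TLP header layout from the PCIe base spec:
 *   DW0 bits 30:29 = Fmt (bit 30 set => TLP carries data),
 *   DW0 bits  9:0  = Length in dwords (0 encodes 1024 DW).
 * max_payload_dwords should reflect the Max_Payload_Size
 * negotiated on your link (e.g. 128 bytes = 32 DW). */
static bool tlp_is_well_formed(uint32_t hdr_dw0,
                               uint32_t payload_dwords,
                               uint32_t max_payload_dwords)
{
    uint32_t fmt      = (hdr_dw0 >> 29) & 0x3u;
    bool     has_data = (fmt & 0x2u) != 0;
    uint32_t length   = hdr_dw0 & 0x3FFu;
    if (length == 0)
        length = 1024;              /* Length == 0 encodes 1024 DW */

    if (has_data != (payload_dwords != 0))
        return false;               /* Fmt and payload disagree    */
    if (has_data && length != payload_dwords)
        return false;               /* Length field mismatch       */
    if (payload_dwords > max_payload_dwords)
        return false;               /* exceeds Max_Payload_Size    */
    return true;
}
```

Checks like these catch the Length/payload mismatches that most commonly trigger the malformed-TLP path described above.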
03-08-2010 06:28 PM
mbadelt, were you able to get to the bottom of the issue you ran into? I am seeing similar, but slightly different, behavior. In my case, I chug along at the maximum rate on a Gen2 x8 system until I have transmitted about 8 KB of data in small 128-byte packets. At that point, I have exhausted all of my credits. I do get credit updates, but at a very low rate and only a few at a time. So my transfer appears to complete, but it slows down dramatically after the first 8 KB.
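For scale: posted-write flow control is accounted in units of one header credit per TLP plus one data credit per 16 bytes of payload, so each 128-byte write consumes 1 header credit and 8 data credits. If the root complex advertised, say, 512 posted data credits and then stopped returning them, the transmitter would stall after 512/8 = 64 packets, which is exactly 8 KB. A small illustrative helper (the 512-credit figure is an assumption; read the credit-status outputs on your core for the real limits):

```c
#include <stdint.h>

/* Posted-write credit accounting per the PCIe base spec:
 * 1 header credit per TLP, 1 data credit per 16 bytes of payload. */
#define FC_UNIT_BYTES 16u

/* How many fixed-size posted writes fit in the advertised credits
 * before the transmitter must stall. Illustrative only. */
static uint32_t packets_before_stall(uint32_t hdr_credits,
                                     uint32_t data_credits,
                                     uint32_t payload_bytes)
{
    uint32_t data_per_pkt =
        (payload_bytes + FC_UNIT_BYTES - 1) / FC_UNIT_BYTES;
    uint32_t by_data = data_credits / data_per_pkt;
    return (hdr_credits < by_data) ? hdr_credits : by_data;
}
```

With hypothetical limits of 128 header and 512 data credits, 128-byte packets stall after 64 packets (8 KB), matching the symptom above.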
05-27-2010 06:13 PM
Brett, did you solve this slowdown problem? I see something similar with the SOPC PCIe x1 core when DMAing to Tx. I am testing with both Intel US15W/Z530 and AMD 785NB/745 chipsets. The first 1-2 KB runs at full speed, then throughput settles at 16 MB/s on the US15W and 64 MB/s on the AMD. I see that Altera's SOPC MM-to-PCIe bridge takes 6 clock cycles per QW, so I am running the DMA at 125 MHz to compensate, but the PCIe core just pushes back and stalls for more clock cycles. Both platforms run at 160 MB/s+ with non-SOPC instantiations.
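For reference, the 6-cycles-per-QW figure puts a hard ceiling on the bridge itself: 8 bytes every 6 cycles at 125 MHz is roughly 166 MB/s, well above the observed 16-64 MB/s, so the bottleneck must indeed be backpressure from the PCIe side rather than the bridge. A back-of-the-envelope helper using the numbers quoted above:

```c
/* Upper bound on MM-to-PCIe bridge throughput in bytes/second,
 * given the clock rate and the cycles spent per 8-byte QW.
 * At 125 MHz and 6 cycles/QW this is about 166.7 MB/s. */
static double bridge_peak_bytes_per_sec(double clock_hz,
                                        double cycles_per_qw)
{
    return clock_hz * 8.0 / cycles_per_qw;
}
```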
05-27-2010 06:17 PM
btesar, the issue I ran into was system related. The application hardware I was testing wrote to the same system address over and over, so the memory controller appeared to be holding me back, rather than PCIe. When I switched to a more realistic use case, my transmission rates recovered.