Hi all,

I am facing a problem that I don't fully understand. I have an instantiation of the TSE core integrated into my application. In simulation, everything (receiving and sending) works perfectly fine. Fitted into a Cyclone V (GX Development Kit), the sending part doesn't work.

With SignalTap I checked that the register-space configuration worked nicely. Packets sent to the board are correctly received on RGMII_IN and are forwarded nicely to ff_rx_data. My logic then prepares an answer and puts it into the core via ff_tx_data. In simulation, the data is transmitted correctly to the PHY on RGMII_OUT. According to SignalTap, the TX data transfer into the Ethernet core works perfectly fine, just as simulated, but there is no activity going out of the core.

The details of how I use ff_tx_data you can see here: https://www.alteraforum.com/forum/attachment.php?attachmentid=7155 It is done just as shown in the TSE manual. As tx_clk I assigned a 125 MHz clock generated from a PLL; that clock I assigned to GBE_GTX_CLK as well. Basically, there is no difference between the simulation and the real hardware.

I would be very happy if someone could point out to me why the Ethernet core fitted into the FPGA does not put out the transmit data, but does so in simulation.

Thank you very much,
Sebastian
No idea? :)

To add some information: I don't connect the TSE core to the PHY (Marvell 88E1111) via MDIO, but since the only troublesome part at the moment is the core not responding to the input data on ff_tx_data, I believe the world outside the core is not of any concern at that point. Maybe someone has an idea why the core does not put out transmit data on RGMII_OUT, even though it does so in simulation and the user application data on ff_tx_data appear to be transferred correctly into the core?
There are two things I can think of:
[list]
[*]Your packet is too small (IIRC the minimum Ethernet packet size is 64 bytes) and the TSE could decide to drop it.
[*]The TX part of the core isn't enabled. I think there is a configuration register where you can enable or disable the TX and RX parts separately.
[/list]
I know you said you checked the configuration registers, but it never hurts to check once more ;) However, I don't know why you see a difference between simulation and synthesis.
Thank you very much for your thoughts! ...but I re-checked, and TX is clearly enabled in the registers. Also, a nice thing about the TSE is that it would fill the padding field itself in case the packet size is too small.
IIRC it doesn't do that. The TSE adds the CRC, but it does not add padding bytes if the packet to transmit is too small. It can generate an error if a received packet is too small, though.
Since it works in simulation and not in hardware, maybe double-check that it truly works by running a (post-synthesis) gate-level simulation. If that doesn't work, it points to a difference between your testbench and the synthesized design, which you ought to be able to resolve. If it does work, it points squarely at the physical hardware, which may need more investigation.
I am having an issue where TX data is corrupted intermittently. Most of the time RX and TX work fine, but after a while I get dropped packets due to an FCS error. This may not be the same problem, but it could be related.

Have you looked at the RGMII signals with a scope to check that there is definitely no data being output? (The data may be corrupted during transmission over the RGMII, which will cause an FCS error, and your receiver will probably reject the packet with no evidence it was ever sent.) Also look at the clock output and TX control signals to check for activity. If data is being output, you can put the PHY into loopback mode to analyse the TX packets.
It could be a timing problem on the RGMII pins too. If you set the PHY to loopback mode, you can at least check that the packets you receive back on the RGMII pins are the same as those you send. If they are identical, then the timing is probably right.
--- Quote Start --- It could be a timing problem on the RGMII pins too. If you set the PHY to loopback mode, you can at least check that the packets you receive back on the RGMII pins are the same as those you send. If they are identical, then the timing is probably right. --- Quote End ---

Note also that the Marvell 88E1111 has options, settable over MDIO, that determine the phase delay of the source-synchronous capture clocks on the TX and RX paths. One has to sign a non-disclosure agreement with Marvell to see the details.
I experienced a similar issue, now solved; the problem was the timing of the data and clock signals. You need to constrain the design correctly to make the data edge-aligned or center-aligned, depending on your PHY settings (described as HP mode or 3COM mode). The key to solving my problem was to run the clock signal through an ALTDDIO buffer rather than directly to the output pin. This means the clock is routed through the same logic as the data, which should help with timing closure (this is not an obvious thing to do from reading the application notes!).

The following might help:
http://www.altera.com/literature/an/an477.pdf?gsa_pos=1&wt.oss_r=1&wt.oss=an477
http://www.altera.com/literature/an/an433.pdf?gsa_pos=1&wt.oss_r=1&wt.oss=an433
We are experiencing this same issue, but still can't get TX to work even after the following experiments:

1) ALTDDIO for enet_gtx_clk, and constraining the output timing for edge-aligned (configured the internal-delay option of the 88E1111 PHY)
1b) Enabled the programmable IOE delay (D5 with a value of 31) on enet_gtx_clk, which according to the Cyclone V datasheet should shift the clock from 0.5 ns (min) to 1.2 ns (max)
3) ALTDDIO for enet_gtx_clk, but driven by the 90-degree phase output of the same PLL that supplies tx_clk to tx_data/ctl

We are using the Arrow SoCKit with the Terasic HSMC Communications board. Has anyone gotten this combination to work? Terasic does not supply board layout files, so we can't probe the RGMII test points, assuming they even exist. As a last-ditch effort, I might try driving enet_gtx_clk directly from the PLL 90-degree output and/or implementing our own RGMII interface to add a facility loopback... Any other ideas?
Hello guys,

I'm experiencing the same issue as the thread starter: functional simulation works perfectly, but after programming the device and looking at SignalTap, the TX lines are always zero. I checked the configuration registers in SignalTap: all OK. I checked the statistics counters: the counter of good transmitted packets increments with every packet transmitted, but the TX RGMII data lines and the TX RGMII control line are always zero. Any ideas?