Nios® V/II Embedded Design Suite (EDS)

Question regarding relationship between ALTGX channel width and effective data rate




I've been encountering some issues that I believe are related to how I configured the transceiver cores on my Stratix IV via ALTGX. For context, I'm using Terasic's TR4 development board with the Stratix IV EP4SGX230KF40C2 FPGA. 


My goal is to transmit a bit stream via the FPGA's onboard transceivers, which can be thought of as generating a PWM signal. I need to do this at 6250.0 Mbps while using a clock frequency of 312.5 MHz or 156.25 MHz.  


I've configured ALTGX as follows: 


Protocol: Basic 

Subprotocol: X4 

Operation Mode: Transmitter Only 

Number of Channels: 4 (The channels receive independent bit streams) 

Deserializer Block Width: Double 

Channel Width: 40 bits 

Base Setting: Data Rate 

Effective Data Rate: 6250.0 Mbps 

Input Clock Frequency: 312.5 MHz 

Base Data Rate: 6250.0 Mbps 
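To make the numbers in this configuration concrete, here is a quick sanity check (a sketch, assuming the fabric presents a new 40-bit word to tx_datain on every 312.5 MHz clock edge):

```python
# Hedged sanity check: compare the bit throughput implied by the
# fabric-side settings against the configured effective data rate.
# Assumes one full 40-bit word is consumed per fabric clock cycle.

channel_width_bits = 40
fabric_clock_mhz = 312.5            # clock driving tx_datain updates
effective_data_rate_mbps = 6250.0   # configured line rate

fabric_throughput_mbps = channel_width_bits * fabric_clock_mhz
print(fabric_throughput_mbps)                              # 12500.0
print(fabric_throughput_mbps / effective_data_rate_mbps)   # 2.0
```

Under that assumption, the fabric would be offering data at exactly twice the configured line rate, which is at least consistent in magnitude with the intermittent behavior described below.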


While I can transmit the waveforms, there seems to be an intermittent problem. To explain: I can generate a test waveform that produces a 50-50 PWM signal. This waveform is then manipulated in logic in a way that modifies the high and low times of the signal, so that my original 50-50 PWM signal is transformed into a 60-40 PWM signal. The problem is that this works only some of the time. That is, the manipulation appears to be applied to only a portion of the signal in a periodic manner: 50 high, 50 low, 50 high, 50 low, 60 high, 40 low, 50 high, 50 low, ...  


I used Signal Tap to watch the data flow through the logic that modifies the waveform and it seems to manipulate the signal correctly. For this reason, I believe that the problem lies in transferring the data and may be due to how I'm loading the data into the transceivers.  


The logic that manipulates the waveforms/bit streams runs at 312.5 MHz and operates on 40-bit chunks of data. Each 40-bit chunk is then fed into the transceiver via its tx_datain port. The pll_inclk of the transceiver core generated by MegaWizard is connected to a 312.5 MHz PLL clock signal, cal_blk_clk is connected to a 78.125 MHz clock, tx_coreclk is fed by coreclkout (it loops back around), and tx_clkout is left dangling.

I think that I may be writing data to tx_datain faster than it is being pulled into the transceiver, resulting in the behavior I'm observing. According to MegaWizard, I can achieve this same data rate (6250 Mbps) with a channel width of 40 at an input clock frequency of 156.25 MHz (or 125 MHz, 250 MHz, etc.). I read through the datasheets, but I must be missing something. What is the relationship between the channel width, the effective data rate, and the input clock frequency?

For my application, I need to transmit all 40 bits that I'm writing to tx_datain, and they need to be transmitted at a data rate of 6250 Mbps. This data is currently being refreshed at a frequency of 312.5 MHz. Should I actually be refreshing that data at a frequency of 156.25 MHz? 
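For reference, the relationship I have been assuming (and would like confirmed) is the commonly cited serializer identity: effective data rate = channel width × parallel (word) clock, with the pll_inclk reference frequency being a separate choice. A small sketch of what that identity implies for a 6250 Mbps line rate:

```python
# Sketch of the assumed serializer relationship:
#   effective data rate = channel width * parallel (word) clock.
# The pll_inclk reference frequency is a separate parameter; several
# reference frequencies can synthesize the same 6250 Mbps line rate.

data_rate_mbps = 6250.0
for width_bits in (16, 20, 32, 40):
    word_clock_mhz = data_rate_mbps / width_bits
    print(f"{width_bits:2d}-bit words -> refresh tx_datain at {word_clock_mhz} MHz")

# If this identity holds, 40-bit words pair with a 156.25 MHz word
# clock, while a 312.5 MHz refresh would instead pair with 20-bit words.
```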


Any insight is appreciated.