
Set-up and hold time of ALTDDIO megafunction module

Altera_Forum

I am using the ALTDDIO megafunction in an Arria V device to capture incoming data on the order of 1 Gbps, and my timing analysis (using the TimeQuest Timing Analyzer) tells me that delays should be adjusted so that a setup and hold time of at least 0.4 ns is provided.

Is there a device datasheet or other documentation for Arria V that confirms the minimum setup and hold time requirement for data being clocked into the ALTDDIO module?
5 Replies
Altera_Forum

Setup and hold requirements are essentially the same for all core registers, not only the DDIO registers. I don't see a specific core register timing specification in the Arria V handbooks I checked (there may be newer revisions), but the requirement is reflected in the sample window specification for the LVDS I/O and discussed in the respective handbook sections.

Altera_Forum

For 1 Gbps, you'll want to use the altlvds_rx megafunction instead of altddio. The altlvds is not just an I/O element but the whole interface: it uses a dedicated LVDS clock tree to feed from the PLL to the I/O, which is much more tightly controlled since it only feeds the I/O. It also doesn't use the DDIO; instead it uses a single capture register clocked at the full data rate, so there is no rise/fall variation.
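For reference, a bare altlvds_rx instantiation looks roughly like the sketch below. This is a minimal, hypothetical example (an 8-channel, 1 Gbps interface with a deserialization factor of 8); the MegaWizard normally generates this wrapper for you, so check the parameter names and values against your generated file rather than taking these as exact.

// Hypothetical altlvds_rx wrapper -- illustrative only, not the poster's design.
module lvds_rx_example (
    input  wire        rx_inclock,  // forwarded clock from the data source
    input  wire [7:0]  rx_in,       // 8 LVDS data channels at 1 Gbps each
    output wire [63:0] rx_out,      // 8 channels x 8-bit deserialized words
    output wire        rx_outclock  // divided-down parallel-side clock
);
    altlvds_rx #(
        .number_of_channels     (8),
        .deserialization_factor (8),
        .data_rate              ("1000.0 Mbps"),
        .inclock_period         (8000),   // ps; assumes a 125 MHz x1 input clock
        .enable_dpa_mode        ("OFF")   // set to "ON" for DPA (see below)
    ) rx_inst (
        .rx_in       (rx_in),
        .rx_inclock  (rx_inclock),
        .rx_out      (rx_out),
        .rx_outclock (rx_outclock)
    );
endmodule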

When using this structure, you will want to run Report RSKM in TimeQuest. Also look at how the Sampling Window is calculated. From a timing perspective, the only major thing you need to do when creating the megafunction is tell it the phase relationship between rx_in and rx_inclock; it will then shift the clock into the middle of the data eye. Note that you can put set_input_delay constraints on the inputs, but they're barely used: TimeQuest just calculates the max-minus-min value and uses that to compute RSKM. For example, if the set_input_delay -max was 0.4 ns and the -min was 0.1 ns, the difference is 0.3 ns, so 0.3 ns of your 1 ns data window is used externally. As long as the Sampling Window is less than 0.7 ns, the interface meets timing. It doesn't look at the individual values, though: if your -max was 10.4 ns and your -min was 10.1 ns, you'd get the same analysis.
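To make that concrete, here is what those constraints might look like in an .sdc file. The port and clock names (rx_in, rx_inclock) and the x1 clock at the bit rate are assumptions for this example, not from the original post; report_rskm is the Tcl form of the Report RSKM task.

# Forwarded clock; the 1.0 ns period assumes an x1 clock at the 1 Gbps bit rate.
create_clock -name rx_inclock -period 1.0 [get_ports rx_inclock]

# Board-level data delays relative to that clock, using the numbers above.
# TimeQuest only uses the max-minus-min difference (0.3 ns) for RSKM, so
# -max 10.4 / -min 10.1 would give exactly the same analysis.
set_input_delay -clock rx_inclock -max 0.4 [get_ports rx_in*]
set_input_delay -clock rx_inclock -min 0.1 [get_ports rx_in*]

# With 0.3 ns of the 1 ns window used externally, RSKM passes as long as
# the Sampling Window stays under 0.7 ns.
report_rskm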

At 1 Gbps, you might want to use Dynamic Phase Alignment (DPA). It's just another switch in the altlvds megafunction. With this, there is no timing analysis at all, as the altlvds block just oversamples your incoming data stream and automatically chooses the best sampling point.
Altera_Forum

Thanks, FvM and Rysc, for the responses. I will go ahead with the design assuming that the timing is similar to other core registers. Unfortunately, in one of my implementations I cannot use the PLL mode of the ALTLVDS module, since there are far too many clocks from the external devices in my design. Hence I want to find the limits at which a non-PLL design can work.

Thanks again, 

--Shantanu.

Altera_Forum

The setup and hold timing is not good (the on-die variation in the models is really large). For 1 Gbps I believe you'll have to use the altlvds logic.

Are the transmitting devices all referenced to the same clock? If so, you can use a single PLL and DPA to clock them all in. DPA basically oversamples each incoming channel and calibrates to it, so the phase relationship between clock and data does not matter at all. You just want to make sure the incoming clock has a 0 ppm difference from the data (i.e., they are all generated from the same source clock upstream). I have seen designs with one device sending clock and data, where that clock line is used to capture data from multiple other devices. I've also seen designs where many devices send data to the FPGA and the clock sent with them is ignored; instead, the base clock that drives the transmitters also drives the FPGA and is used for DPA.

If your devices are all driven from different reference clocks (so they are the same frequency on paper but in reality will drift from each other), you can use DPA with CDR (clock data recovery). I can't remember if Arria V supports this, but you can give it a try.

DPA is a really nice feature, but there is one caveat. If you're capturing data from multiple devices, the sample point will probably not be near the middle of the clock for some interfaces. What that means is that within a single interface, some channels might be off by a bit (it will be the correct data, just shifted by one bit). In that case, you need the transmitter to send a test pattern and then add some logic to shift the DPA alignment to correct for it. There are some control signals in DPA for doing that; a rough sketch of such alignment logic is below.
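As an illustration, the alignment logic might look like the following. It assumes the transmitter repeats a known training word and that each pulse on the megafunction's rx_channel_data_align port slips the word boundary by one bit; the module name, TRAINING_WORD value, and wait interval are all made up for this example.

// Hypothetical per-channel word-alignment controller (illustrative sketch).
module dpa_word_align #(
    parameter [7:0] TRAINING_WORD = 8'hA5   // pattern the transmitter repeats
) (
    input  wire       clk,      // rx_outclock from altlvds_rx
    input  wire       rst_n,
    input  wire [7:0] rx_word,  // one channel's deserialized word (from rx_out)
    output reg        bitslip,  // drives rx_channel_data_align for this channel
    output reg        aligned   // high once the word boundary is found
);
    reg [1:0] wait_cnt;  // settle time after each slip before re-checking

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            bitslip  <= 1'b0;
            aligned  <= 1'b0;
            wait_cnt <= 2'd0;
        end else begin
            bitslip <= 1'b0;                      // default: no slip pulse
            if (!aligned) begin
                if (rx_word == TRAINING_WORD)
                    aligned <= 1'b1;              // boundary found, stop slipping
                else if (wait_cnt == 2'd3) begin
                    bitslip  <= 1'b1;             // slip one more bit
                    wait_cnt <= 2'd0;
                end else
                    wait_cnt <= wait_cnt + 2'd1;  // let the last slip propagate
            end
        end
    end
endmodule

Once aligned goes high on every channel, the transmitter can switch from the training pattern to real data.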

Good luck.
Altera_Forum

Thanks, Rysc, for your valuable inputs. I understand that using the DDIO interface might not give consistent results across pins/devices at 1 Gbps.

We have N different clocks, each sending out 8 channels of data synchronised to the respective clock. All the clocks are of the same frequency but of different phases. N can be higher than the number of PLLs that any Arria V device can provide (for example, N=8 or N=16). Keeping this constraint in mind, I was trying to use non-PLL mechanisms.

My questions are as follows:

(1) If using CDR is a solution, will it work when the data does not have embedded clock information? If an initial training pattern can be generated, will it suffice even when the data is subsequently held constant at '0' or '1' with very few transitions?

(2) If using a non-PLL mechanism such as ALTDDIO is an option, what is the maximum frequency (if not 1 Gbps) at which we can expect consistent output?

Thanks again for your quick response. 

Regards, 

--Shantanu.
