Just a doc about constraining source-synchronous DDR interfaces. It probably complements existing documentation, but it's pretty detailed, with examples.
I have a question about this... I may not have completely understood the document, but in my case neither the transmitter nor the receiver has a PLL, and I won't have a PLL on the board either. In this case I assume the board has uniform delay on clock and data.

From the document it's not clear how to set up the whole thing when a PLL is not involved. For example, I create my generated clock as:

```tcl
create_generated_clock -name ODCLK -source [get_ports {iDCLK}] [get_ports {oDCLK}]
```

where oDCLK is the source-synchronous clock pin, and define the data delays as (actual delay values omitted):

```tcl
set_output_delay -max <delay> -clock DCLK -reference_pin [get_ports {oDCLK}] [get_ports {oMUXDATA[*]}]
set_output_delay -min <delay> -clock DCLK -reference_pin [get_ports {oDCLK}] [get_ports {oMUXDATA[*]}]
set_output_delay -clock_fall -add_delay -max <delay> -clock DCLK -reference_pin [get_ports {oDCLK}] [get_ports {oMUXDATA[*]}]
set_output_delay -clock_fall -add_delay -min <delay> -clock DCLK -reference_pin [get_ports {oDCLK}] [get_ports {oMUXDATA[*]}]
```

When I do the timing analysis, it seems to me TimeQuest is not considering the delay of the output clock and is relating the data arrival to the internal clock rather than the external one, regardless of the -reference_pin directive. I expect that even without a 90-degree shift, with proper settings it should be possible to instruct the router to shift the output clock so that there's enough room to latch data safely at the other end.

The reason why I don't want/can't use a PLL is that my input clock rate varies widely, as the data is coming from a DVI receiver, which can output between 25 and 165 MHz, so there is no one-size-fits-all setup for the PLL that would make it work... Thank you in advance for your help!
1) Run report_timing with the -detail option set to full_path, then look at the Data Required Path. It should start from iDCLK and trace through the FPGA to oDCLK.
2) You're mixing constraints above: you create a generated clock on the output port, yet the set_output_delay constraints don't use this clock and instead use DCLK with -reference_pin. Both should work, but I recommend not using -reference_pin and instead writing "set_output_delay -clock ODCLK ..." (see the sketch after this list).
3) I assume this is double-data rate? The reason a PLL is generally used is that its phase shift doesn't vary over PVT. Take a 166 MHz clock, i.e. a 3 ns data window. To center the clock on the data, the clock or data path must be delayed 90 degrees, or 1.5 ns. Delays vary considerably over PVT, so a path that adds 2 ns in the slow model might add only 1 ns in the fast model. That whole 1 ns of variance is wasted and cuts into your data window. If a PLL were used to shift the clock 1.5 ns, it would be shifted 1.5 ns in both the slow and fast models.
4) You're running at a slow enough rate that you might be able to get away with it. Source-synchronous interfaces run at 300 MHz+ for DDR2/3 and at 1 Gbps for LVDS; at those rates, that variance wouldn't be acceptable. (One solution for you would be to use a PLL and reconfigure it for different frequencies. It's not simple to do, but not that difficult.)
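A minimal sketch of the recommendation in items 1) and 2), reusing the port names from the question (iDCLK, oDCLK, oMUXDATA); the period and delay values are placeholders, so substitute the receiver's real Tsu/Th numbers:

```tcl
# Input clock entering the FPGA (6.06 ns ~ 165 MHz; placeholder period)
create_clock -name DCLK -period 6.06 [get_ports {iDCLK}]

# Generated clock on the source-synchronous clock output port
create_generated_clock -name ODCLK -source [get_ports {iDCLK}] [get_ports {oDCLK}]

# DDR output delays constrained against ODCLK instead of -reference_pin.
# 1.0 / -0.5 stand in for the receiver's Tsu / Th requirements.
set_output_delay -clock ODCLK -max 1.0 [get_ports {oMUXDATA[*]}]
set_output_delay -clock ODCLK -min -0.5 [get_ports {oMUXDATA[*]}]
set_output_delay -clock ODCLK -clock_fall -add_delay -max 1.0 [get_ports {oMUXDATA[*]}]
set_output_delay -clock ODCLK -clock_fall -add_delay -min -0.5 [get_ports {oMUXDATA[*]}]

# Item 1: the Data Required Path should start at iDCLK and trace
# through the FPGA to oDCLK
report_timing -setup -to [get_ports {oMUXDATA[*]}] -detail full_path -npaths 4
```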
Thanks for the quick response!

I will check the report again tomorrow, but if I'm not wrong it was reporting the same source and destination clocks. Regarding the mixed settings: I first tried without the reference pin and still got the same source and destination clock in the report. This probably comes from the fact that the input and output clocks are just hardwired together; however, I assume the report should take into account the delay from the internal logic to the pin, while it seems to me the report always considers the output data clocked by the internal node (pin), not the external pin (port).

As I mentioned, I would love to use a PLL, but reconfiguring it on the fly would require knowing the input frequency, which is not possible since I don't have a fixed-frequency clock to measure the input one.

In addition, I don't completely understand the reason for the 90-degree shift, as it seems to reduce the actual timing window: hold time is usually much shorter than setup, so I'd guess that in a system where data changes some time after the clock you get the best window, since you have huge setup margin and small hold margin, which is usually sufficient. Using the PLL of course allows shifting the clock with a fixed relationship to the input one, but the data will still have delays that vary with PVT, and I assume that variation would be proportional to the one the clock would have without a PLL in its path. So in a source-synchronous interface, not using a PLL should somehow increase the timing window rather than reduce it... What am I missing? Thanks!!!
If using -reference_pin, there's only one clock defined, so the summary section would list them as the same clock. If using a generated clock, it should show a different latch clock. The important thing is item 1) above: the latching clock is the clock coming into the FPGA, going through the FPGA, and out the clock port you've designated.
If you don't know what the frequency is, then a PLL won't work. (You could probably do something complicated. Note that PLLs work over a range, i.e. if a PLL is running at 100 MHz, you could probably fluctuate its input by at least 20 MHz. So if you had some oversampling circuit that could roughly determine the frequency for the PLL... anyway, you get the idea, and that it would be a PITA.)

Setup and hold should be equivalent in a source-synchronous interface. Basically that means the receiving device went through the pains of making sure the clock and data are matched inside that device. Many source-synchronous interfaces specify skew instead, i.e. the transmitter may have 500 ps of skew between clock and data, and the receiver can accept up to 500 ps of skew, or something like that. If the Tsu and Th of the receiving device are not equivalent, that may be their way of saying they've phase-shifted the clock for you, or something like that.

For your last paragraph, the clock and data delays are almost the same whether you use a PLL or not. Remember that the data output delay is: clock input -> possibly through a PLL -> global clock tree -> Tco of the DDR register in the IO cell -> output delay. The clock path should look pretty much identical (at some point the clock tree splits, one path feeding the data registers and one feeding the clock output). As you can see, adding a PLL or not affects both paths equally, so it doesn't hurt. If the PLL weren't used to feed the data too, then it would be a problem.

My recommendation is to close timing at 165 MHz; in theory you should then meet timing at all the lower frequencies (assuming the Tsu/Th requirements are equivalent). When you go to slower rates you'll just barely make timing on setup and have tons of hold margin, but as long as you meet timing it doesn't matter.
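To see why closing at the top rate should cover the rest, here is a rough back-of-the-envelope sketch in Tcl; the DDR data window is half the clock period, and the Tsu = Th = 1.0 ns numbers are placeholders rather than values from this thread:

```tcl
# Rough DDR data-window math at the two frequency extremes.
# Assumes symmetric receiver requirements Tsu = Th = 1.0 ns (placeholders).
foreach freq_mhz {165 25} {
    set period_ns [expr {1000.0 / $freq_mhz}]
    set window_ns [expr {$period_ns / 2.0}]        ;# DDR: half a period per data eye
    set margin_ns [expr {$window_ns - 1.0 - 1.0}]  ;# eye minus Tsu minus Th
    puts [format "%3d MHz: data window %.2f ns, margin %.2f ns" \
              $freq_mhz $window_ns $margin_ns]
}
```

At 165 MHz the window is about 3.03 ns, leaving roughly 1 ns after Tsu and Th; at 25 MHz the window is 20 ns, so every slower rate only adds margin.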