Valued Contributor III
804 Views

Does the fast output register ensure consistent timing?

Hi, 

 

I have some output pins in my FPGA design (actually bi-dir, should not matter) which connect to some external component in a modular system. The data rate on those pins is variable, and the actual delay from the FPGA to that external component is adjustable (using a programmable delay line IC). 

 

Since everything is variable, I did not put any constraints into the SDC-file about these signals. I don't care about the delay. I can trim the delay line to ensure proper timing. 

 

Now I just found out that the optimum delay line delay changes when I make changes to the FPGA design and re-synthesize it. Okay, I didn't think about that before, but it makes sense, as the launching register might be closer or farther away from the I/O pin. The problem is that I have to determine the optimum delay line value after each FPGA change. 

 

My first idea was to come up with some bogus timing constraint to force the fitter to produce at least roughly the same timing (within some range, sure) on each compile. But I'm wondering if a better approach would be to just enable the fast output register option? I think I can rely on the clock distribution network inside the FPGA not changing, and I can expect the exact timing to stay the same because the output register is always at the same location. 
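For reference, a sketch of how the fast output register option is typically enabled per pin via assignments in the Quartus .qsf file (the pin name data_bus is a placeholder, not from my design; substitute your actual pin names):

```tcl
# Placeholder pin name "data_bus[*]" -- replace with your actual pins.
# Pack the output register into the I/O cell:
set_instance_assignment -name FAST_OUTPUT_REGISTER ON -to data_bus[*]
# For bidirectional pins, the output-enable register can be packed too:
set_instance_assignment -name FAST_OUTPUT_ENABLE_REGISTER ON -to data_bus[*]
# And the input register, if the input timing should also stay fixed:
set_instance_assignment -name FAST_INPUT_REGISTER ON -to data_bus[*]
```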

 

Are those assumptions correct? What are your thoughts on this? 

 

Note: I don't care about the exact timing, as long as it does not deviate too much after a re-synthesis (a deviation of a few hundred picoseconds is acceptable). 

 

 

Best regards, 

GooGooCluster
2 Replies
Valued Contributor III
24 Views

If it's a device that has true I/O registers, then it should work. (Even if it doesn't have true I/O registers, that assignment tells the fitter to try to put them into the LAB closest to the pin.) Note that if the device has different types of clock networks, the routing can change from fit to fit, e.g. a global clock has different timing than a regional one. 

I prefer a timing constraint because you now have something that will actively tell you if the timing changed. Try this: 

set_max_delay -to [get_ports {pin_names*}] 10.0 

set_min_delay -to [get_ports {pin_names*}] 0.0 

Now run TimeQuest and see how much slack you have with report_timing. Reduce the max delay until it's small enough that you would notice a change (you don't have to refit, you can just rerun TimeQuest).
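As a sketch of that check (reusing the placeholder port pattern pin_names* from the constraints above; exact options may vary by Quartus version), the slack can be inspected from the TimeQuest Tcl console with:

```tcl
# Report the worst setup (max-delay) paths to the constrained ports:
report_timing -to [get_ports {pin_names*}] -setup -npaths 10 -detail full_path
# And the worst hold (min-delay) paths, to watch the lower bound as well:
report_timing -to [get_ports {pin_names*}] -hold -npaths 10 -detail full_path
```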
Valued Contributor III
24 Views

Hi Rysc, 

 

Thank you for your answer! And sorry for my late reply. I followed your advice and set some min/max delays. Fiddling with the values was a bit trickier than I thought, though. 

 

Anyway, in the meantime I was able to circumvent the whole problem by implementing an ad-hoc delay calibration. I can say that I now get consistent timing after re-synthesizing with a different seed (it didn't work before), but because of the new delay calibration I haven't tested this in hardware yet. 

 

 

Best regards, 

GooGooCluster