Intel® Quartus® Prime Software

clock skew vs clock latency

Altera_Forum
Honored Contributor II

Hi everyone, 

 

I have a question about the different results of specifying clock skew vs. specifying clock latency in SDC. 

 

As pointed out by Rysc in his popular TimeQuest User Guide, and also in other posts (for example, http://www.alteraforum.com/forum/showthread.php?t=5294&highlight=set_clock_latency), set_clock_latency serves the same function as the clock skew we account for in set_input_delay/set_output_delay constraints. My understanding is that they are equivalent in timing analysis: if we apply the two methods to the same timing netlist after compilation, we will get the same STA result from TimeQuest. 

 

But I find they have different effects when used as timing constraints to compile a design.  

 

In my design, the clock signal from an on-board oscillator goes to an ADC and an FPGA, and the output data from the ADC go to the FPGA. It is estimated that the clock arrives at the FPGA 0.5 ns later than it arrives at the ADC. 

 

So, taking the clock at the ADC as the reference point and creating a virtual clock, I can use either of the following two methods to constrain the ADC-to-FPGA input delay: 
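For context, here is a sketch of the clock setup this describes. The 10 ns period is an assumption for illustration; only the port name ADCLK and the virtual clock name ADCLK_virt come from my constraints:

```tcl
# Clock as it arrives at the FPGA clock input port (period assumed)
create_clock -name ADCLK -period 10.0 [get_ports {ADCLK}]

# Virtual clock representing the same oscillator as seen at the ADC;
# a create_clock with no target is a virtual clock, used as the
# -clock reference for set_input_delay
create_clock -name ADCLK_virt -period 10.0
```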

 

Method (1):  

 

set ADCLK_skew 0.5 

 

set_input_delay -clock ADCLK_virt [expr $DATA_delay - $ADCLK_skew + $Tco_ADC] 

 

Method (2): 

 

set_clock_latency -source 0.5 [get_ports {ADCLK}]  ;# ADCLK is the input port of the clock at the FPGA 

 

set_input_delay -clock ADCLK_virt [expr $DATA_delay + $Tco_ADC] 
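For completeness, here are fully written-out versions of both methods. The data port name adc_data[*] and the numeric delay values are illustrative assumptions, not from my actual SDC:

```tcl
set DATA_delay 1.0   ;# board trace delay (assumed value)
set Tco_ADC    2.0   ;# ADC clock-to-output (assumed value)
set ADCLK_skew 0.5   ;# clock arrives at FPGA 0.5 ns after the ADC

# Method (1): fold the clock skew into the input delay
set_input_delay -clock ADCLK_virt \
    [expr $DATA_delay - $ADCLK_skew + $Tco_ADC] [get_ports {adc_data[*]}]

# Method (2): model the skew as source latency on the FPGA clock port
set_clock_latency -source 0.5 [get_ports {ADCLK}]
set_input_delay -clock ADCLK_virt \
    [expr $DATA_delay + $Tco_ADC] [get_ports {adc_data[*]}]
```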

 

I attached the TimeQuest results of the two timing netlists generated by these two methods. 

 

As expected, in the STA report of the timing netlist generated by method (2), both the input data delay and the clock source latency are increased by 0.5 ns compared to method (1), and they cancel out in the setup-slack calculation. 
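To make the cancellation concrete, here is the setup-slack arithmetic in symbolic form (D is the total input delay term, A and R stand for the rest of the arrival and required paths; only the 0.5 ns skew is from my design):

```text
setup slack = data required time - data arrival time

Method (1): input delay = D - 0.5, source latency on ADCLK = 0
  data arrival  = A + (D - 0.5)
  data required = R
  slack         = R - (A + D - 0.5) = R - A - D + 0.5

Method (2): input delay = D, source latency on ADCLK = 0.5
  data arrival  = A + D
  data required = R + 0.5
  slack         = (R + 0.5) - (A + D) = R - A - D + 0.5
```

Both methods yield the same slack expression, which is why the 0.5 drops out of the analysis.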

 

But method (1) reports a better timing result, with a larger setup slack (2.135).  

 

The full path details show that some logic is placed in different locations by the two methods (FF1_X1_Y8_N21 vs. FF_X2_Y13_N9), resulting in differences in both the data arrival path and the data required path timing. 

 

So I think these two methods do drive the compiler differently when used as timing constraints on a design, and they are not 100% equivalent. 

 

Can anybody give a further explanation? 

 

Thank you!
Altera_Forum
Honored Contributor II

Imagine I am TimeQuest, and I was told: 

 

Method (1): set_input_delay -clock ADCLK_virt [expr $DATA_delay - 0.5 + $Tco_ADC] 

Method (2): set_input_delay -clock ADCLK_virt [expr $DATA_delay + $Tco_ADC] 

 

They certainly differ by that -0.5. 

 

The preceding statements, 

Method (1): set ADCLK_skew 0.5  ;# I just substituted its value 

Method (2): set_clock_latency -source 0.5 [get_ports {ADCLK}]  ;# does not change anything 

 

My recollection of set_clock_latency is that it is part of Synopsys' original approach for ASICs and got carried over to FPGAs without much practical use.
Altera_Forum
Honored Contributor II

Thanks for replying, Kaz. Yes, obviously, the input delay of method (2) is 0.5 ns larger than that of method (1). But so is the source latency component in the data required path, as shown in my attached timing report (method (2) reports 0.5 as its source latency, while method (1) reports 0). So the 0.5 difference gets cancelled out in the setup-slack calculation and has no effect on the final result. 

 

What really makes a difference to the final result is another value (the interconnect delay) in the timing path, which I marked with a red box in the attached timing report. It seems that the same logic element is placed in different locations by the two methods 

(FF_X1_Y8_N21 in method (1) vs. FF_X2_Y13_N9 in method (2)), which results in different interconnect delays and therefore different final slacks. 

 

Since the compiler behaves differently with these two methods as constraints, which one is preferred if we want to achieve a better timing result? 

 

Does method (1) (specifying clock skew instead of set_clock_latency) always achieve a better timing result than method (2), or vice versa?
Altera_Forum
Honored Contributor II

Having read back through your constraints, I got a bit lost.  

I know set_input_delay is used for data with reference to a clock (actual clock or virtual clock), but I can't see your data port. 

Is this another use of this constraint?
Altera_Forum
Honored Contributor II

wdshen, 

The fitter uses a random seed, so it's normal for it not to yield the same result on successive compilations. 

But if you run the same fitted design through TimeQuest (no fitting, just change the .sdc and re-run TQ), both methodologies should give the same result. They do for me.
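For anyone who wants to try this comparison, the re-run can be done from the command line; a sketch, assuming a project revision named my_project (edit the .sdc between runs and do not re-run the fitter):

```shell
# After editing the .sdc to switch between method (1) and method (2),
# re-run only the timing analyzer on the existing fitted netlist:
quartus_sta my_project

# (Re-running quartus_fit would re-place the design and could change
# the placement-dependent interconnect delays discussed above.)
```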
Altera_Forum
Honored Contributor II

Hi Kaz, 

 

Sorry for the confusion. I didn't write out the complete set_input_delay syntax in the post. Yes, there is an input port associated with it in my actual SDC file.  

 

My focus here is on comparing the two methods, so I omitted the target of set_input_delay, assuming we all know it is applied to an input port. 

 

Hi rbugalho, 

 

That is exactly my point. I think these two methods are equivalent only when we use them in timing verification. When we use them as timing constraints to direct Quartus to compile a design, they are different, because Quartus generates different timing netlists from them. 

 

As for the random seed, I think you are talking about the value we can set under the Fitter Settings tab, right? But normally we don't need to change that number unless the compilation can't meet timing no matter how we tweak our constraints and source code, in which case we try another seed. 

 

With the same constraints and the same source code, we should always get the same result on successive compilations; the result is repeatable. I think I have verified this with experiments.
Altera_Forum
Honored Contributor II

Let me restate my view. Case (1) tells TimeQuest that the data is offset from its clock launch edge by [expr $DATA_delay - 0.5 + $Tco_ADC]. 

Case (2) tells TimeQuest that the data is offset from its clock launch edge by [expr $DATA_delay + $Tco_ADC]. Although you specify clock latency, it is applied to a clock called something else, and I don't know how this new clock name will be related to the data clock (virtual or otherwise). 
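For what it's worth, my reading of the SDC semantics (an assumption worth checking against the TimeQuest documentation) is that source latency set on a port applies to the clock defined on that port, not to the virtual clock. The clock name adclk_in and the period below are illustrative:

```tcl
# Clock defined on the FPGA clock input port (name and period assumed)
create_clock -name adclk_in -period 10.0 [get_ports {ADCLK}]

# Source latency set on the port applies to adclk_in, the clock that
# arrives through that port -- not to the virtual clock ADCLK_virt
set_clock_latency -source 0.5 [get_ports {ADCLK}]
```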

 

Personally, I wouldn't bother with latency. Moreover, different results may also occur because the tools do not target a theoretical optimum; they target passing timing and stop there, so your verdict is best gauged by whether there is a pass/fail difference.