
set_max_delay vs set_net_delay

Altera_Forum

Hi, 

 

I don't clearly understand the difference between set_max_delay and set_net_delay, or their use cases when constraining a design.

What I gather from the Quartus help is that the only difference between set_max_delay and set_net_delay is that set_net_delay can set both max and min values, whereas set_max_delay only provides a max constraint.
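For reference, my understanding of the two commands' shapes (register and pin names here are just placeholders):

```tcl
# set_max_delay: a single max value; the analysis includes clock skew.
set_max_delay -from [get_registers src_reg] -to [get_registers dst_reg] 5.0

# set_net_delay: can bound -max and/or -min, looking only at the net delay.
set_net_delay -from [get_pins {src_reg|q}] -to [get_pins {dst_reg|d}] -max 5.0
set_net_delay -from [get_pins {src_reg|q}] -to [get_pins {dst_reg|d}] -min 1.0
```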
Altera_Forum

set_max/min_delay includes clock skew, while set_net_delay does not. (I'm also not sure whether set_net_delay goes through LUTs or applies only to reg-to-reg paths; I never use set_net_delay.)

set_max/min_delay basically tells TimeQuest what the setup or hold relationship should be. For example, if you have a 10 ns clock and want to give a particular path 2 cycles, you would normally do set_multicycle_path -setup 2 and set_multicycle_path -hold 1, making the setup relationship 20 ns and the hold relationship 0 ns. A set_max_delay of 20 gives an identical analysis. The benefit of set_max/min_delay is that you can give it any number you want: if you had two registers after an asynchronous transfer and wanted to overconstrain the path to give time for the metastability to settle, you can directly apply a set_max_delay of whatever value you want.
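A sketch of that equivalence on a 10 ns clock (register names hypothetical):

```tcl
# Give the path two cycles: setup relationship becomes 20 ns, hold 0 ns.
set_multicycle_path -setup 2 -from [get_registers src_reg] -to [get_registers dst_reg]
set_multicycle_path -hold 1 -from [get_registers src_reg] -to [get_registers dst_reg]

# An identical setup analysis using an explicit value instead:
set_max_delay -from [get_registers src_reg] -to [get_registers dst_reg] 20.0

# And since any number is allowed, an asynchronous transfer path can be
# overconstrained directly, e.g. to leave time for metastability to settle:
set_max_delay -from [get_registers async_src] -to [get_registers {sync_reg[0]}] 5.0
```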

 

There is also a new set_data_delay, used to constrain just the data path without worrying about clock delays. Technically, this doesn't make sense. For example, say your source clock delay is 3 ns and your data path is 6 ns, so the data arrives at the destination register at time 9 ns. If you ignore clock delays and apply a set_data_delay (or set_net_delay) of 6 ns, it meets timing. Now say the data path grows to 7 ns but the clock path shrinks to 2 ns. The data arrives at the destination register at exactly the same time, yet it now fails timing. In reality, most clocks are on global clock trees with fixed delays, so the assumption that they won't change is fairly valid, but it's still a bit iffy.
Altera_Forum

Rysc, 

 

I can't find any documentation on set_data_delay in the quartus 17 tools or online documentation. Do you know of a source for that documentation? 

I'm trying to find a good reference for constraining clock domain crossing paths. From what I understand, set_false_path between clock domains is not the preferred way; you are supposed to use a combination of set_max_delay, set_min_delay, set_max_skew, and set_net_delay commands.

 

In Xilinx, the way I've seen CDC paths constrained is with a set_max_delay -datapath_only constraint. But Altera doesn't have a -datapath_only switch for its set_max_delay constraint, so I assume that for the CDC paths to be constrained correctly, you have to do something to 'ignore' the clock skew.
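If it helps, the Xilinx (Vivado XDC) form I mean is the following, for which I haven't found a single-command Quartus equivalent (cell names hypothetical):

```tcl
# Vivado: bound only the data path, ignoring clock skew / insertion delay.
set_max_delay -datapath_only -from [get_cells src_reg] -to [get_cells dst_reg] 8.0
```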

 

This is why I was interested when I saw your post about set_data_delay, and was wondering if it is the Altera equivalent of the Xilinx set_max_delay -datapath_only constraint. That seems like a much simpler way to constrain CDC paths than having to apply the set_max_delay, set_min_delay, set_max_skew, and set_net_delay constraints for every CDC.

 

I'm assuming this thread best describes the way to constrain CDC paths (although I don't have a special 'map_cdc' module in my design - I was hoping I could just specify all paths between CLKA and CLKB):

 

https://alteraforum.com/forum/showthread.php?t=55835
Altera_Forum

Yeah, I don't see anything on set_data_delay either. Even typing it in TimeQuest doesn't give much info.  

That combination of set_max/min/skew/net_delays is for FIFO crossings, where the gray code count is passed from one domain to the other: if there is more than one period of clock skew, the gray code value received will be wrong and the FIFO will fail. It is not a general recommendation, and I have not seen anyone give a recommendation that everyone follows. Most designs cut timing between the unrelated domains without problems, and in general, if designed right, this is fine. Signals crossing such domains should not rely on any known relationship.
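For the FIFO gray-code case specifically, the combination usually looks something like this (a 10 ns destination period and hypothetical register names assumed):

```tcl
# Relax the default setup/hold checks on the gray-code pointer crossing...
set_max_delay -from [get_registers {wr_gray[*]}] -to [get_registers {rd_sync[*]}] 10.0
set_min_delay -from [get_registers {wr_gray[*]}] -to [get_registers {rd_sync[*]}] -10.0
# ...then bound the net delay and the bit-to-bit skew so every bit of the
# gray count lands within one destination clock period.
set_net_delay -from [get_registers {wr_gray[*]}] -to [get_registers {rd_sync[*]}] -max 10.0
set_max_skew -from [get_registers {wr_gray[*]}] -to [get_registers {rd_sync[*]}] 10.0
```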

I hear the complaint that if you false-path it (set_clock_groups), the delay could be anything. How do you know it will work if the delay is 100 ns? What about 1000 ns? That should never happen, but once it's false-pathed you'll never know. The reverse argument is that when I ask at what value it will actually fail, they never know. I think bounding those paths is mostly a comfort factor; most users don't have any strict requirement and don't know what the bound should be. (And if they do, that means they've designed for a certain situation and need to constrain for it.)

Anyway, I did an analysis of the constraint a year or two ago and did this write-up. I haven't checked it since: 

 

Some quick notes: 

1) Since it co-exists with setup and hold checks, I think there are two ways to use this constraint: 

Option 1: 

set_data_delay -from {two*} -to {three*} 3.2 

set_clock_groups -asynchronous -group {clkA} -group {clkB} 

The set_clock_groups cuts setup and hold analysis on the path and leaves set_data_delay to do the analysis. 

 

Option 2: 

set_data_delay -from {two*} -to {three*} 3.2 

set_max_delay -from {two*} -to {three*} 50.0 

set_min_delay -from {two*} -to {three*} -50.0 

The set_min/max_delay loosens the setup and hold analysis on the path until the set_data_delay is what drives the path analysis.

 

In the project’s test.sdc, I added these options plus a few other examples that do not work and why, specifically: 

- Doing a set_data_delay by itself will usually not work, since the hold check still exists and the default setup check could still have priority. 

- Adding a set_false_path, since that has priority and will cut the set_data_delay assignment too. 

- Doing a set_data_delay directly between clocks does not work either.

 

2) There is no special report_data_delay or anything like that. Instead, it shows up in the normal “report_timing -setup…” reports, except the clock delays are zeroed out.

 

3) If I were to ask for any changes: 

- I would like to see it work for clocks, e.g.: 

set_data_delay -from [get_clocks clkA] -to [get_clocks clkB] 3.2 

NOTE - Here is a reply from a developer to this last issue:

I looked into Rysc's clock-to-clock constraint, and there is a TimeQuest bug that prevents the set_data_delay analysis from turning on unless there is at least one node-based set_data_delay constraint in the design. The easiest way to work around this if you only want clock-to-clock constraints is to add a dummy constraint to an input port, which will be accepted by the constraint parser, but won’t result in any valid timing paths – e.g. “set_data_delay -to din[0] 1”. 

Note 2 - This is very old and I believe has been fixed, but not verified.
Altera_Forum

Rysc, 

Thanks for the writeup. So if I add set_clock_groups -asynchronous between clock domains, which cuts the setup/hold analysis on the CDC paths, and design the CDC paths correctly, I should be good?

I do have multi-bit counters and FSM states that I use in another clock domain. For the FSM states, I've added a command to the .qsf telling the tools to encode the states in gray code. The multi-bit counters I run through bin2gray -> cdc regs -> gray2bin, so I'm wondering if I need to use the set_max/min/net/skew constraints on those paths? Will the set_clock_groups -asynchronous take precedence over those max/min/net/skew constraints?
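In other words, this is roughly what I'm hoping is sufficient (clock and register names hypothetical, 10 ns period assumed):

```tcl
set_clock_groups -asynchronous -group [get_clocks clkA] -group [get_clocks clkB]

# ...and the open question: whether each gray-coded bus still needs the
# extra bounds on top of the clock-group cut, e.g.:
set_net_delay -from [get_registers {gray_cnt[*]}] -to [get_registers {gray_sync[*]}] -max 10.0
set_max_skew -from [get_registers {gray_cnt[*]}] -to [get_registers {gray_sync[*]}] 10.0
```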