
Constraining source synchronous output

Altera_Forum
Honored Contributor II

Hi all, 

 

I'm trying to understand how to constrain source synchronous interfaces, and I am a little bit confused. Let's say we have an interface to an external SDR SDRAM device. The latch clock for the SDRAM is phase shifted by 180° inside the FPGA with respect to the launch clock of the output register. Now we want to define the max output delay. In different documents I found the following equation:

 

max output delay = data_trace_delay_max - clk_trace_delay_min + set_up_time 

 

Then I heard in an online training video from Altera that the max output delay specifies the maximum amount of time available to output a signal and still meet the setup time of the external device.

 

But why does this available time become bigger when the data trace delay becomes longer? Wouldn't a longer trace mean that we have less time available to bring our data to the output of the FPGA? Or have I misunderstood something?

 

I'd appreciate it if someone could resolve my confusion.

 

Thanks in advance!
6 Replies
Altera_Forum
Honored Contributor II

Hi Rookie2017, 

 

You're right that the maximum output delay constraint is there to guarantee the setup time at the SDRAM's input register.

 

Think about maximum output delay as "how long must the launched data signal be stable (at the FPGA's data output pin) before the launch clock edge (at the FPGA's clock output pin)".

 

In the initial case, assume that your clock and data traces just have the right length, so that the term data_trace_delay_max - clk_trace_delay_min becomes zero, and therefore max output delay = set_up_time. Makes sense, right? The launched signal must be stable before the launch clock, so that the setup time of the SDRAM register is not violated. That's no problem since you already added a 180° phase shift to the clock. 
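To make that concrete, here is a minimal SDC sketch of this initial case. The port names (sys_clk, sdram_clk, sdram_dq) and the 3.0 ns setup time are just assumptions for illustration; in a real design the generated clock's -source would usually be the PLL output that actually drives the clock pin.

# 100 MHz launch clock entering the FPGA (hypothetical name and period)
create_clock -name sys_clk -period 10.000 [get_ports sys_clk]

# Forwarded SDRAM clock on its output pin, 180 degrees behind the launch clock
create_generated_clock -name sdram_clk -source [get_ports sys_clk] -phase 180 [get_ports sdram_clk]

# Matched traces: data_trace_delay_max - clk_trace_delay_min = 0 ns,
# so the max output delay reduces to the SDRAM setup time (assume 3.0 ns)
set_output_delay -clock sdram_clk -max 3.000 [get_ports {sdram_dq[*]}]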

 

Now when your data trace becomes longer (but the clock trace remains the same length), the data arrives later at the SDRAM. It comes closer to the clock edge, and therefore it grows into the region where the SDRAM's setup time might be violated. Logically, the maximum output delay must become larger, i.e. the FPGA must somehow ensure the launched data signal is stable for a bit longer before the clock edge comes.
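Plugging illustrative numbers into the equation from your first post (again just assumptions): with data_trace_delay_max = 1.2 ns, clk_trace_delay_min = 0.5 ns and a 3.0 ns setup time, the constraint from the sketch above grows to 1.2 - 0.5 + 3.0 = 3.7 ns.

# Longer data trace than clock trace: the required max output delay increases
set_output_delay -clock sdram_clk -max 3.700 [get_ports {sdram_dq[*]}]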

 

How can you achieve that? Either the fitter plays around with the output delay of that pad (if available), or you have to change the 180° to something else. 
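If you do end up changing the phase shift, keep in mind that the SDC only describes what the hardware does: the shift itself lives in the PLL settings, and the generated clock constraint has to match it. A sketch with an arbitrary example value, reusing the hypothetical names from above:

# PLL reconfigured to launch the forwarded clock 270 degrees after the launch clock
# (the -phase value must match whatever the PLL is actually set to produce)
create_generated_clock -name sdram_clk -source [get_ports sys_clk] -phase 270 [get_ports sdram_clk]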

 

I think the reason for your confusion is that you think about the maximum output delay as an "available" time. It's easier to think about it as a "required" time. Also, the min/max output delays do not actually define the time between the clock edge and the data transition; they rather define a time window around the clock edge in which the data must not change. 
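Seen as such a window, the two constraints together could look like this (hypothetical values: tSU = 3.0 ns, tH = 1.0 ns, matched traces):

# Data must not change from 3.0 ns before to 1.0 ns after the forwarded clock edge
set_output_delay -clock sdram_clk -max  3.000 [get_ports {sdram_dq[*]}]
set_output_delay -clock sdram_clk -min -1.000 [get_ports {sdram_dq[*]}]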

 

 

Best regards, 

GooGooCluster
Altera_Forum
Honored Contributor II

Hi GooGooCluster,

 

Thank you very much for your good answer. Your explanation is perfectly understandable, but I still have a few questions:

 

1) 

 

--- Quote Start ---  

 

 

"how long must the launched data signal be stable (at the fpga's data output pin) before the launch clock edge (at the fpga's clock output pin)". 

 

 

--- Quote End ---  

 

 

 

Just about the terms: Here you speak of the launch clock. Wouldn't this rather be the latch/capture clock? And the launch clock would be the input clock of the output register inside the FPGA?  

 

 

2) 

 

--- Quote Start ---  

 

In the initial case, assume that your clock and data traces just have the right length, so that the term data_trace_delay_max - clk_trace_delay_min becomes zero, and therefore max output delay = set_up_time. Makes sense, right? The launched signal must be stable before the launch clock, so that the setup time of the SDRAM register is not violated. That's no problem since you already added a 180° phase shift to the clock. 

 

--- Quote End ---  

 

 

So we agree that in this case the data has to be stable at the output of the FPGA at least "set_up_time" before the latch/capture (in my terms) clock rises at the output of the FPGA. So why do we talk about a delay equal to the setup time? Why don't they call it an earliness or something like that, since the output has to be ready a specific time before the clock edge?

 

Best regards.
Altera_Forum
Honored Contributor II

Hi Rookie2017, 

 

 

--- Quote Start ---  

Just about the terms: Here you speak of the launch clock. Wouldn't this rather be the latch/capture clock? And the launch clock would be the input clock of the output register inside the FPGA? 

--- Quote End ---  

 

 

In the timing constraints you constrain the signals entering/leaving the FPGA. The FPGA has two pins that reach the SDRAM in this example: a clock leaves the FPGA through an I/O pin, and a data signal leaves the FPGA through another I/O pin. These I/O pins act as outputs, so they only have a launch clock. Latching is done in the SDRAM, which is not covered by the FPGA's timing constraints.
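You can see this in the constraints themselves (same hypothetical names as in my earlier sketch): the clock that set_output_delay refers to is the launched copy at the FPGA's clock output pin; the SDRAM's own registers never appear in the SDC.

# The forwarded clock exists only as a generated clock on the FPGA's output pin
create_generated_clock -name sdram_clk -source [get_ports sys_clk] -phase 180 [get_ports sdram_clk]

# The output delay is stated relative to that launched clock
set_output_delay -clock sdram_clk -max 3.000 [get_ports {sdram_dq[*]}]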

 

Think about it this way: the job of timing constraints is to say "dear FPGA, if you guarantee that these constraints are met, my external circuitry will work". So you have to view the problem from the other end: you want to guarantee the timing at the latching side, but you cannot, so you calculate backwards through the traces and define what the launching side (the FPGA) must look like.

 

 

--- Quote Start ---  

So we agree that in this case the data has to be stable at the output of the FPGA at least "set_up_time" before the latch/capture (in my terms) clock rises at the output of the FPGA. So why do we talk about a delay equal to the setup time? Why don't they call it an earliness or something like that, since the output has to be ready a specific time before the clock edge?

--- Quote End ---  

 

 

I totally agree that "maximum output delay" is a very non-intuitive name. However, I cannot tell you why it has this name. I guess they wanted to have an abstract name that does not imply any setup time and such, so that it's technology-agnostic. 

 

Another possibility: I mentioned earlier that I see this constraint as a required time, which defines the time window in which the FPGA's output signal must not change. Judging from other literature, this is a rather odd view I guess (even though it works). I guess whoever named this constraint had another view on it, which made more sense in that context. 

 

 

 

Best regards, 

GooGooCluster
Altera_Forum
Honored Contributor II

Hi GooGooCluster, 

 

Thanks, I guess you are right. 

 

I think I'm now at least able to choose the correct equations and the correct values for the different cases... even if I don't understand why it's called a delay ;)

 

Best regards
Altera_Forum
Honored Contributor II

 

--- Quote Start ---  

 

So why do we talk about a delay equal to the setup time? Why don't they call it an earliness or something like that, since the output has to be ready a specific time before the clock edge?

 

--- Quote End ---  

 

 

set_output_delay -max decides the late margin of the data transition; it equals the tSU of the external device, assuming no board delay difference between data and clock.

set_output_delay -min decides the early margin of the transition; it equals -tH of the external device, with the same assumption as above.

The output delay is referenced to the latch edge, while tCO (which decides the early/late position of the transition) is referenced to the launch edge.

To be more accurate:

early margin = -min delay

late margin = clock period - max delay
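A quick worked instance of those formulas (hypothetical numbers, and the clock/port names from the sketches earlier in the thread): 10 ns clock period, tSU = 3.0 ns, tH = 1.0 ns, no board delay difference between data and clock.

set_output_delay -clock sdram_clk -max  3.000 [get_ports {sdram_dq[*]}]
set_output_delay -clock sdram_clk -min -1.000 [get_ports {sdram_dq[*]}]
# late margin  = clock period - max delay = 10.0 - 3.0 = 7.0 ns after the launch edge
# early margin = -min delay               = -(-1.0)    = 1.0 ns after the launch edge
# i.e. the data transition at the pin must fall between 1.0 ns and 7.0 ns after the launch edge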
Altera_Forum
Honored Contributor II

I'm not sure which online training you've watched, but here are some specifically on this topic for others reading this: 

 

https://www.altera.com/support/training/catalog.html?coursetype=online&keywords=source synchronous