
Both sides source-synchronous interface

paw_93

Hi,
I have a question about how SDC timing constraints affect the design. Imagine a source-synchronous design: the FPGA drives a flip-flop on the ASIC side, and the ASIC then propagates both data and clock back. A simple, standard situation.


I see two ways to handle it.

1) Send clock and data on the falling edge, receive and latch on the rising edge on the ASIC side (so the data is sampled in the middle of its valid window), then send data back to the FPGA on the rising edge and sample it there on the falling edge. If we then constrain the design, even with just a virtual clock, we know the FPGA will receive the data at the proper moment and sample it in the middle.

 

create_clock -name clkin -period 10.000 -waveform {0.000 5.000} [get_ports {clkin}]

set_clock_uncertainty -from [get_clocks clkin] 0.1

set_input_delay -max -clock clkin [expr {$someCalcMax1}] [get_ports {datain}]
set_input_delay -min -clock clkin [expr {$someCalcMin1}] [get_ports {datain}]

set_output_delay -max -clock clkout -clock_fall [expr {$someCalcMax2}] [get_ports {dataout}]
set_output_delay -min -clock clkout -clock_fall [expr {$someCalcMin2}] [get_ports {dataout}]
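
Note: the snippet above references a clock named clkout for the output delays but never creates it. A minimal sketch of the missing piece, assuming clkout is a port driven from an IOPLL output (the pin pattern below is a placeholder for the real hierarchy):

# Hypothetical: give "set_output_delay -clock clkout" a launch clock to reference
create_generated_clock -name clkout -source [get_pins {*|altera_iopll_i|*_pll|outclk[0]}] [get_ports {clkout}]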

 


2) Second way. We always latch on the rising edge, on both the FPGA and the ASIC side, but we need to constrain it properly. Constraining the source-synchronous interface from FPGA to ASIC and from ASIC to FPGA separately might not be enough if the traces on the PCB/ASIC are not well balanced (so the data does not arrive at the same moment the clock changes, or there is a clock skew/slew-rate problem). So we can add a dependency between the FPGA-to-ASIC clock (clkout) and clkin: jitter and delay on the clock path, then latency for the data path.

2.1) Full-path constraints

 

set interface_clk {*|altera_iopll_i|*_pll|outclk[0]}

# Clock delay on PCB trace:
set T_IF_CLK_PCBmin 0.3
set T_IF_CLK_PCBmax 0.8

# Data delays on PCB trace
set T_IF_DTA_PCBmin 0.2
set T_IF_DTA_PCBmax 0.5

create_generated_clock -name OUT_IF_CLK -divide_by 1 -source [get_pins $interface_clk] [get_ports {clkout}]
# Sourced from the clkout port, so this clock is derived from OUT_IF_CLK
create_generated_clock -name IN_IF_CLK -divide_by 1 -source [get_ports {clkout}] [get_ports {clkin}]

# Add the PCB trace delay on top of clkout's latency; this makes up clkin's latency
set_clock_latency -source -late $T_IF_CLK_PCBmax [get_clocks IN_IF_CLK]
set_clock_latency -source -early $T_IF_CLK_PCBmin [get_clocks IN_IF_CLK]

set_clock_uncertainty -from [get_clocks IN_IF_CLK] 0.01

# T_IF_COmin/max are the ASIC clock-to-output delays, defined elsewhere (e.g. from the ASIC datasheet)
set_input_delay -max -clock OUT_IF_CLK [expr {$T_IF_COmax + $T_IF_DTA_PCBmax + $T_IF_CLK_PCBmax}] [get_ports {datain}]
set_input_delay -min -clock OUT_IF_CLK [expr {$T_IF_COmin + $T_IF_DTA_PCBmin + $T_IF_CLK_PCBmin}] [get_ports {datain}]

 

Let me know if there is something wrong with these constraints.
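
For the FPGA-to-ASIC direction in 2.1, the complementary output constraints would take the standard source-synchronous form; a hedged sketch, where T_IF_SU and T_IF_H are hypothetical setup/hold requirements of the ASIC input register (not given above):

# Hypothetical ASIC input timing requirements
set T_IF_SU 1.0
set T_IF_H  0.5

# Max: slowest data trace vs. fastest clock trace, plus the external setup requirement
set_output_delay -max -clock OUT_IF_CLK [expr {$T_IF_DTA_PCBmax - $T_IF_CLK_PCBmin + $T_IF_SU}] [get_ports {dataout}]
# Min: fastest data trace vs. slowest clock trace, minus the external hold requirement
set_output_delay -min -clock OUT_IF_CLK [expr {$T_IF_DTA_PCBmin - $T_IF_CLK_PCBmax - $T_IF_H}] [get_ports {dataout}]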

2.2) Constrain only the input interface (cutting the dependency on the FPGA as the source of the internal ASIC clock)

 

 

create_clock -name clkin -period 10.000 -waveform {0.000 5.000} [get_ports {clkin}]

set_clock_uncertainty -from [get_clocks clkin] 0.1

set_input_delay -max -clock clkin [expr {$someCalcMax1}] [get_ports {datain}]
set_input_delay -min -clock clkin [expr {$someCalcMin1}] [get_ports {datain}]

set_output_delay -max -clock clkout [expr {$someCalcMax2}] [get_ports {dataout}]
set_output_delay -min -clock clkout [expr {$someCalcMin2}] [get_ports {dataout}]
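
To make the "cutting" in 2.2 explicit to the timing analyzer, the returned clock can be declared asynchronous to the internal PLL domain; a sketch, assuming the PLL clock name matches the pattern from 2.1 and that any data crossing between the two domains is synchronized separately:

# Hypothetical: treat clkin as asynchronous to the internal PLL clock,
# so only the pin-level input/output delays above are analyzed
set_clock_groups -asynchronous -group [get_clocks {clkin}] -group [get_clocks {*|altera_iopll_i|*_pll|outclk[0]}]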

 


The questions are:

1) If we take the second way, will the FPGA tools compensate for the needed latencies (even if we use fast input/fast output registers)? How? If the PCB traces are badly aligned or the clock jitter is very high, will the FPGA development tools try to compensate/optimize?

2) Since I had to struggle with the first approach: what are its pros (I see only drawbacks in latency; it is a half-cycle transfer, so it may affect timing negatively)? Is it a better way to constrain when we don't know the details of the latencies in the external device? Does it really enforce sampling in the middle of the data valid window (in combination with VHDL, where the proper latching edge is ensured by the falling_edge() function)?

3) Are fast input/fast output registers subject to any timing adjustment? The Intel documentation says the assignment serves "to lock the input register in the LAB adjacent to the I/O cell feeding it". Should I understand that it will not add delay where needed?

4) In the case of the second approach, should I disable the fast input/fast output register assignments for better alignment? (See the QSF sketch after this list.)

5) In Cyclone V one could control the delay (in ps) for many pins dynamically; in Cyclone 10 that is no longer possible at run time, and one has to use assignments and recompile. So there used to be an easy way to fine-tune the interfaces, and now there is not. The question that comes to mind: should we think of the timing constraints as something that will do the whole job, so that the programmable delay is no longer needed? Do they force the fitter to make things work (e.g. by adding longer routes inside the FPGA), or are they just a recommendation, so that if a given placement produces a path that violates timing we need to read the report and intervene ourselves?
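
Regarding questions 3 and 4, a QSF sketch of the fast I/O register assignments in question; FAST_INPUT_REGISTER and FAST_OUTPUT_REGISTER are existing Quartus assignment names, and the pin names are the ones used in this example:

# Pack the I/O registers into/next to the I/O cell (minimal, fixed pin-to-register delay)
set_instance_assignment -name FAST_INPUT_REGISTER ON -to datain
set_instance_assignment -name FAST_OUTPUT_REGISTER ON -to dataout
# Or turn them off to let the fitter place the registers freely (and add routing delay where useful)
set_instance_assignment -name FAST_INPUT_REGISTER OFF -to datain
set_instance_assignment -name FAST_OUTPUT_REGISTER OFF -to dataout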

 

[Attachment: syncinterf.png]

1 Reply
RichardTanSY_Intel

It appears that you have posted two identical cases on the forum. 

To ensure a more streamlined and efficient resolution process, we kindly request that you consolidate your inquiries into a single forum post.


We will continue the support in the other forum case (link below) and will transition this thread to community support. 

https://community.intel.com/t5/Programmable-Devices/How-well-FPGA-adjust-delays-based-on-sdc/m-p/1592583#M95785


If you have any further questions or concerns, please don't hesitate to reach out. 

Thank you and have a great day!


Regards,

Richard Tan

