Attached is the setup check from TimeQuest. In the data path report pane, the clock network delay of the data required path is negative. What does this mean? Also, it seems to me that Data Required Time is calculated as:

Data Required Time = Clock Arrival Time + uTsu

This is completely different from its usual definition:

Data Required Time = Clock Arrival Time - uTsu - Setup Uncertainty
If you create the timing report with the "full path" detail option, you may be able to see what is happening.
Thanks,
The uTsu must be a negative number. Note that these are not always pure numbers. For example, the register might theoretically have a uTsu and uTh, but those are always measured relative to the clock arriving at it. There is no black-and-white point where that clock comes in, so the tools may be rolling more or less of the register's clock delay into the uTsu/uTh. I've seen things like that done over the years, and though they try to avoid it, what matters is that the final answer is correct.
(But yes, in theory the uTsu should shorten your data required time, making it harder to meet setup timing. Again, practice may deviate a little from theory...) I tend to concentrate on the numbers I can control: clock relationships, levels of logic, and routing delays.
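As a sketch of how these terms conventionally combine in a setup check (all numbers below are illustrative, not taken from the attached report; note how a negative uTsu would *increase* the data required time, which matches what the report shows):

```latex
% Conventional setup check, with made-up example numbers:
%   Data Required = Latch Clock Arrival - uTsu - Setup Uncertainty
%   Slack         = Data Required - Data Arrival
\begin{align*}
\text{Data Required} &= 10.000 - 0.100 - 0.020 = 9.880\ \text{ns}\\
\text{Data Arrival}  &= 0.500 + 4.000 = 4.500\ \text{ns}\\
\text{Slack}         &= 9.880 - 4.500 = 5.380\ \text{ns}
\end{align*}
```

If the tool instead reports Data Required = Clock Arrival + uTsu, that is numerically equivalent to the conventional formula whenever the uTsu it prints is the negated value, which is consistent with the "uTsu must be a negative number" observation above.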
I am confused by the TimeQuest timing analyzer too.

1. ------------------------------------------
QII gives the following critical warning:
Critical Warning: The following clock transfers have no clock uncertainty assignment. For more accurate results, apply clock uncertainty assignments or use the derive_clock_uncertainty command.
Critical Warning: From CLK24 (Rise) to CLK24 (Rise) (hold)
However, the SDC file definitely has the derive_clock_uncertainty command. What should I do? Should I add this command a second time?

2. ------------------------------------------
The SDC and TimeQuest API reference manual (http://www.altera.com/literature/manual/mnl_sdctmq.pdf) does not describe virtual clocks. The create_clock command description states that <targets> must be specified:
create_clock [-add] [-name <clock_name>] -period <value> [-waveform <edge_list>] <targets>
I am confused about virtual clocks. Somewhere I've read that I should use virtual clocks in the set_input_delay and set_output_delay commands, but I am not sure that I've created my virtual clocks correctly. Should virtual clocks somehow relate to real clocks?

3. ------------------------------------------
Some SDC commands accept -add or -add_delay options, and I am not sure when I should use them. For instance, the set_input_delay description in the API reference manual reads:
-add_delay: Add to existing delays instead of overriding them
Does that mean I can use only the very first set_input_delay command without the -add_delay option?
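One possible cause of the warning in question 1 (offered as an assumption, not a diagnosis of this particular SDC) is ordering: derive_clock_uncertainty only covers transfers between clocks that already exist when it runs, so it is usually placed after all clock-creation commands. A minimal sketch, with an illustrative clock name and period:

```tcl
# Hypothetical SDC ordering sketch; the clock name and period are examples.
# Base clock on the FPGA pin:
create_clock -name CLK24 -period 41.667 [get_ports CLK24]

# Create any PLL-generated clocks before deriving uncertainty.
derive_pll_clocks

# Apply default setup/hold uncertainty to every clock transfer
# that exists at this point. Placing this earlier can leave some
# transfers uncovered.
derive_clock_uncertainty
```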
1. No idea. Are other clocks getting uncertainty added? Look at where this assignment is called out, and then expand it to see the uncertainties it adds (there will be a lot of them). I've never seen this before, though.

2. The <targets> does not have to be there; I leave it off all the time. I create virtual clocks solely for I/O constraints: I create the virtual clock and then use it in the set_input/output_delay -clock option. Most of the time that clock will look identical to the clock coming into the FPGA. So why do it?

- It makes more sense when you understand set_input/output_delay constraints. They do not directly constrain the I/O paths in the FPGA. Instead, they describe an external register: what it's clocked by, and how long it takes to send data to the FPGA. Armed with that information, TimeQuest can figure out what the I/O paths need to look like.
- It gives you the ability to change the external clock. For example, I could change its duty cycle, give it more uncertainty, give it a set_clock_latency, or change its -waveform, without having to change the clock coming into the FPGA. This provides flexibility.
- Easier timing reports. Rather than having to name all the I/O, my timing .tcl file might have something like:
report_timing -setup -from_clock ext_clk_virt -npaths 50 -detail full_path -panel_name "s: Inputs ext_clk_virt"
report_timing -hold -from_clock ext_clk_virt -npaths 50 -detail full_path -panel_name "h: Inputs ext_clk_virt"
This reports input setup and hold timing on all inputs clocked by this virtual clock. If I add a new input port that uses this clock, it will automatically be reported.
- And most importantly, derive_clock_uncertainty will do the right thing. If you use an internal clock as the clock for your set_input/output_delay constraints, derive_clock_uncertainty can only add one setup and one hold uncertainty value when that clock is both source and destination, and so it will use the uncertainty values for a transfer inside the device. If you create a virtual clock, it now knows there is a different clock involved and will calculate a different uncertainty. This is probably the most important reason (although admittedly the difference in uncertainty will be small).

3. For input/output constraints, I use -add_delay for double-data-rate interfaces. Basically, it means you are describing a second external register that the I/O port connects to. So for DDR it might look something like:
set_output_delay -clock ddr_ext_clk -max 0.5 [get_ports ddr_out*]
set_output_delay -clock ddr_ext_clk -min 0.5 [get_ports ddr_out*]
set_output_delay -clock ddr_ext_clk -max 0.5 [get_ports ddr_out*] -clock_fall -add_delay
set_output_delay -clock ddr_ext_clk -min 0.5 [get_ports ddr_out*] -clock_fall -add_delay
I believe I've posted a document about source-synchronous DDR constraints, which would help explain this. (But yes, the very first one does not use -add_delay. I think the write SDC command, which I generally don't recommend, appends -add_delay to all of them because it just doesn't know, and it doesn't hurt if a previous constraint didn't exist. You'll also note I didn't need it for the -min option when there was a -max, since those two constrain two different sides and would not overwrite each other.)
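The virtual-clock flow described above can be sketched as a minimal SDC fragment. All names here (clk, ext_clk_virt, din) and the delay values are assumptions for illustration, not taken from any real design in this thread:

```tcl
# Minimal virtual-clock sketch; names and numbers are hypothetical.
# Real clock on the FPGA pin:
create_clock -name clk -period 10.000 [get_ports clk]

# Virtual clock modeling the external device's clock: same period,
# but no <targets>, so it exists only to anchor I/O constraints.
create_clock -name ext_clk_virt -period 10.000

# Describe the external register driving port din: it is clocked by
# ext_clk_virt and its data takes 1-4 ns to reach the pin.
set_input_delay -clock ext_clk_virt -max 4.0 [get_ports din]
set_input_delay -clock ext_clk_virt -min 1.0 [get_ports din]

# With a distinct source clock, this can now derive a different
# uncertainty for the inter-chip transfer than for internal paths.
derive_clock_uncertainty
```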
Well, I am still really confused about how to add constraints to achieve a particular goal. I have read many posts on the Altera forum discussing constraint assignments (and gone through the training and documents) to reach design goals, but I still find it a bit obscure.

1. As a beginner, I would like to see how a design is constrained using a simple example, and I hope that others will help me out.

2. My objective in this case is to design a simple multiplier and (a) make it work at a specific frequency, e.g. 100 MHz, and (b) find out the multiplier's maximum frequency of operation.

3. My simple design in Verilog is as follows:
module mul1 (clk, reset, dataa, datab, start, done, result);
  input clk;
  input reset;
  input [31:0] dataa;   // widths inferred from the timing report below (dataa[1], result[62])
  input [31:0] datab;
  input start;
  output done;
  output [63:0] result;
  reg done;
  reg [63:0] result;

  always @(posedge clk) begin
    if (reset) begin
      result <= 64'b0;
      done <= 1'b0;
    end
    else begin
      if (start) begin
        result <= dataa * datab;
        done <= 1'b1; // done asserted when result ready
      end
      else if (done) begin
        done <= 1'b0;
      end
    end
  end
endmodule
4. The code can be synthesized using DSPs or LUTs, but in this case let us assume we need the fastest implementation, so we opt for the DSP-based multipliers.

5. I think single-cycle constraints are the easiest, so we should constrain with them first. (I don't think multicycle constraints are applicable to this particular design?)

6. Assuming that we want to see whether the design will work at 100 MHz, I have made a sample SDC file:
create_clock -period 10.000 -name real_clock [get_ports clk]
create_clock -name ext_clk -period 10.0
set_input_delay -clock ext_clk -max 4.0 [get_ports dataa]
set_input_delay -clock ext_clk -min 1.0 [get_ports dataa]
set_input_delay -clock ext_clk -max 4.0 [get_ports datab]
set_input_delay -clock ext_clk -min 1.0 [get_ports datab]
set_input_delay -clock ext_clk -max 4.0 [get_ports start]
set_input_delay -clock ext_clk -min 1.0 [get_ports start]
set_output_delay -clock ext_clk -min 4.0 [get_ports done]
set_output_delay -clock ext_clk -max 1.0 [get_ports done]
set_output_delay -clock ext_clk -min 4.0 [get_ports result]
set_output_delay -clock ext_clk -max 1.0 [get_ports result]
7. I have taken the 10 ns period from the 100 MHz requirement, but I have put in the input and output delay values at random. I do not understand the strategy for choosing these values.

8. A sample line from the worst-case setup timing path is as follows:

slack   from      to               launch clk  latch clk   relationship  clk skew  data delay
-3.609  dataa[1]  result[62]~reg0  ext_clk     real_clock  10.000        3.457     13.084

9. In the above design, I don't understand why TimeQuest takes ext_clk as the launch clock and real_clock as the latch clock, and what does the relationship column mean?

10. To ask something more fundamental: is it correct to translate our frequency spec into input/output delays?

I hope you guys can help me out. Thanks, Silva
Your setup relationship is the requirement between the two clocks (in this case it is always 10 ns). Note that I/O requirements get reported just like internal paths.

Let's look at your first set_input_delay constraint. Remember to think of set_input_delay as a circuit description. It says there is an external register driving data into data*. That register is clocked by ext_clk, and its data takes between 1 and 4 ns to get to the data* port. Since the external clock is the same as the internal clock, they have a 10 ns relationship. (Draw the clock waveforms, and you'll see that when ext_clk launches data, it needs to get to clk within 10 ns.) Now 4 ns of that is used externally, so you're leaving 6 ns for the FPGA delay, i.e. the FPGA's data delay minus the clock delay to the internal register must be less than 6 ns.

Remember that this is hypothetical. In a real design, your multiplier is driven by something. So if the external device had a Tco of 3.5 ns and the board delay was 0.5 ns, then your external delay would be 4 ns. That makes sense, since you've really described what's going on outside the FPGA. I believe your case is made up, so it's confusing how 4 ns is the value you came up with.

On the hold side, look at your clock waveforms. The data sent from ext_clk must not get to clk in less than 0 ns, so that's your hold relationship. Since the external minimum delay is 1 ns, the only way the FPGA could fail timing is if its delay were -1 ns, i.e. its data delay minus the clock delay to the register was -1 ns.

Finally, on your output delays you seem to have switched the max and min values, which I don't get. Your -max should really be larger than your -min. If your max were 4 ns, you'd do the same thing: you have a 10 ns setup relationship, 4 ns is used externally, and hence the FPGA must get its data out in 6 ns (kind of like a 6 ns Tco).