
Simulator accuracy

Altera_Forum
Honored Contributor II

Hello! 

 

I have been designing with the Quartus II software for about two months now. At first my focus was on learning VHDL. In the meantime my design has become pretty complex, and now I wonder how accurate the simulator timings are if all I have specified in the project settings is which device I use. 

 

If, for example, I design a simple clock divider and watch the simulation result, I see a delay of about 8 ns between the rising edge of the main clock (50 MHz) and the divided clock when I use a Stratix EP1S10 device. 
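The divider itself is essentially just a toggle register, something like this (a simplified sketch, not my exact code): 

library ieee;
use ieee.std_logic_1164.all;

entity clock_divider is
    port (
        clk     : in  std_logic;  -- 50 MHz main clock
        clk_out : out std_logic   -- 25 MHz divided clock
    );
end entity clock_divider;

architecture rtl of clock_divider is
    signal div : std_logic := '0';
begin
    process (clk)
    begin
        if rising_edge(clk) then
            div <= not div;  -- toggle on every rising edge: divide by two
        end if;
    end process;

    clk_out <= div;
end architecture rtl;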

After that, I put the divided clock on a pin of my NIOS Development Kit (which I am not using for NIOS at the moment) and measured the signals (main clock and divided clock). On the oscilloscope, I saw a difference of 12 ns between the rising edges. 

 

Now, my questions are: to what extent can I rely on the simulator results, and among the enormous number of settings in Quartus II, how do I find the right ones for my setup (as mentioned: a NIOS Development Kit, Stratix Edition, with an EP1S10 device)? Is it enough to just select the device I use in the project settings, or are there more settings I have to get right? 

 

Greets 

Maik
Altera_Forum
Honored Contributor II

It's probably best to start with what the timing models show. The most common model is the slow corner, which is what Altera says will be the worst-case delay for each path. This accounts for many factors, the main ones being PVT: process, voltage, and temperature. So if you got the slowest die out of the fab (one that still passes qualification), ran it at the slowest (highest) temperature and the slowest (lowest) voltage allowed by the device specs, it would still be equal to or faster than this number. So consider these numbers a ceiling.  

 

There is also a fast corner analysis. This can be used in static timing analysis to analyze the fastest a delay can be. What you'll find is that there is a wide range of values a delay can take. For example, if you have a pure delay of 12 ns in your simulation (which uses the slow corner), and the fast corner showed the delay at 6 ns, then the actual delay could be anywhere in between, and the design needs to be able to account for that. (It's impossible to create silicon that doesn't vary. All FPGA/ASIC designs deal with this...) 

 

Now, if you have two signals coming out 12 ns apart in the simulation, they probably won't show as much variance, because you're not looking at raw delays but at the difference between delays, which means the PVT effects will largely track each other. This falls more into the realm of on-chip variation (OCV) analysis.  

 

If I lost you, that's all right for now. Static timing analysis can be extremely complex, but most designs can get away without having to grasp the complexities. 

 

Now, my very first suggestion is not to gate the clock. This is one of the strongest recommendations I can make. Understandably there are situations where this must be done, but in your case you have a PLL that can be used to make a full-speed and a half-speed clock. The edges will be very closely aligned (enough that you shouldn't have to worry about hold-time violations), and the PLL will also reduce PVT variations in the clock tree (this is what a PLL does: it removes clock-tree delays and variations, and performs clock synthesis). 
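As a sketch of what I mean (my_pll stands in for whatever wrapper the altpll MegaWizard generates for you; inclk0/c0/c1/locked are the wizard's usual port names, but check your generated file): 

library ieee;
use ieee.std_logic_1164.all;

entity clocks is
    port (
        clk_50m    : in  std_logic;  -- board clock
        clk_full   : out std_logic;  -- 50 MHz from the PLL
        clk_half   : out std_logic;  -- 25 MHz from the PLL, edge-aligned to clk_full
        pll_locked : out std_logic
    );
end entity clocks;

architecture rtl of clocks is
    -- Hypothetical MegaWizard-generated wrapper around altpll.
    component my_pll
        port (
            inclk0 : in  std_logic;
            c0     : out std_logic;
            c1     : out std_logic;
            locked : out std_logic
        );
    end component;
begin
    pll_inst : my_pll
        port map (
            inclk0 => clk_50m,
            c0     => clk_full,
            c1     => clk_half,
            locked => pll_locked
        );
end architecture rtl;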

 

Synchronous design is by far the easiest to do timing analysis on, and when it can be done, it is highly recommended. With it, most of that talk about timing models can be ignored. (It's still relevant at the IOs, but that's a whole other thread.) I probably haven't answered everything, but hopefully that's a glimpse of the effects at play here.
Altera_Forum
Honored Contributor II

Hello Rysc! 

 

Thank you for your answer; it is very informative! 

 

At the beginning of my work on my current project (which is also my first VHDL/FPGA project), I asked myself (and asked in a German forum) whether it would be better to use a PLL. At that time my understanding of PLLs was very limited, and everybody told me that it is no problem to divide (or, as you say, 'register') the main clock. 

I use the divided clock to drive an ADC at 25 MHz. Up to now it works very well, and I have had no problems, because I always try to incorporate all the timing information I have (from the datasheets of my components and from the simulator results) into my design. 

 

Now, as the project gets more and more complex, I have some (I admit: combinational) signals that only become stable very close to the clock edge at which the result is registered (about 2 ns before it). So I wonder whether I will face massive problems in the real world, possibly without knowing why. 

 

Greets 

Maik
Altera_Forum
Honored Contributor II

People often get away with using divide-by registers, but always use a PLL if you can. (In fact, if you truly need a logic-generated clock that is derived from a single main clock, I recommend putting everything on the main clock and creating logic to feed a clock enable; i.e. in your case the divide-by-two output would be used as a clock enable instead of a clock.) Using a PLL (or a single clock with a clock enable) keeps your edges aligned, which allows a lot of the difficulties in timing analysis to be ignored. 
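Roughly like this in VHDL (a minimal sketch; the names and data width are placeholders): 

library ieee;
use ieee.std_logic_1164.all;

entity div2_enable is
    port (
        clk      : in  std_logic;  -- the single 50 MHz clock for everything
        data_in  : in  std_logic;
        data_out : out std_logic
    );
end entity div2_enable;

architecture rtl of div2_enable is
    signal en : std_logic := '0';  -- clock enable, high every other cycle
begin
    process (clk)
    begin
        if rising_edge(clk) then
            en <= not en;             -- toggles at the 25 MHz rate
            if en = '1' then
                data_out <= data_in;  -- behaves like a 25 MHz register,
            end if;                   -- but is clocked by the 50 MHz clock
        end if;
    end process;
end architecture rtl;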

 

For IO timing (i.e. to your ADC), which timing engine are you using, Classic or TimeQuest? If you're just getting started with timing analysis, I would recommend TimeQuest, since it's the way of the future. I think it's a little harder to get started with (the Classic analyzer, TAN, does a lot of stuff behind the scenes for you), but once you've got your basic building blocks (clock constraints and IO constraints) going, it's much more powerful and you'll enjoy using it. 
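For example, the basic clock constraints in a TimeQuest .sdc file look something like this (the clock and port names are placeholders for your design): 

# 50 MHz board clock entering the FPGA (20 ns period)
create_clock -name clk_50m -period 20.000 [get_ports {clk_50m}]

# If a PLL generates your internal clocks, let TimeQuest derive them
derive_pll_clocks
derive_clock_uncertainty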

 

For IO timing, make sure you do all the calculations. The whole point of the slow/fast models is that you cover your bases, so that once the numbers get tight (say 2 ns in your case, but they can be much less than that), you can still be sure your device will work under all conditions. What's the clocking scheme? If the board-level clock that feeds the FPGA also feeds the ADC, then you've got a pretty standard analysis. If the board clock is laid out for low clock skew, then you've got a single period for the ADC to get data off chip, across the board, and into the FPGA. If there is skew, you need to account for that. If your ADC sends a clock to the FPGA alongside the data, it gets more complicated. I believe there is a Source-Synchronous App Note for TimeQuest that was put on the web. That's close to being as complicated as it gets, but usually isn't too bad.
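As a sketch of the IO-constraint side (the port names and numbers are placeholders; take the real tco min/max and board trace delays from your ADC datasheet and layout): 

# Data from the ADC, launched by the same 50 MHz board clock.
# max = ADC tco(max) + board delay(max); min = ADC tco(min) + board delay(min).
set_input_delay -clock clk_50m -max 9.0 [get_ports {adc_data[*]}]
set_input_delay -clock clk_50m -min 2.0 [get_ports {adc_data[*]}]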
Altera_Forum
Honored Contributor II

Hello! 

 

Thanks again, Rysc! 

 

Well, I think I am using the Classic Timing Analyzer, because I have not entered any constraints myself... 

 

 

--- Quote Start ---  

What's the clocking scheme? If the board-level clock that feeds the FPGA also feeds the ADC, then you've got a pretty standard analysis. If the board clock is laid out for low clock skew, then you've got a single period for the ADC to get data off chip, across the board, and into the FPGA. If there is skew, you need to account for that. If your ADC sends a clock to the FPGA alongside the data, it gets more complicated. I believe there is a Source-Synchronous App Note for TimeQuest that was put on the web. That's close to being as complicated as it gets, but usually isn't too bad. 

--- Quote End ---  

 

 

Now it gets a little too deep... Maybe I have to think about this for a while until I understand what you mean... 

As far as I can tell, I feed the ADC with the divided clock through an FPGA pin. Then I wait two main clock cycles and fetch the data from the FPGA input pins that are connected to the ADC's output pins. As I said, this works pretty well for my project... 
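In VHDL my scheme is roughly the following (a simplified sketch; the real code and the data width differ): 

library ieee;
use ieee.std_logic_1164.all;

entity adc_capture is
    port (
        clk      : in  std_logic;                     -- 50 MHz main clock
        adc_clk  : out std_logic;                     -- 25 MHz to the ADC pin
        adc_data : in  std_logic_vector(7 downto 0);  -- from the ADC output pins
        sample   : out std_logic_vector(7 downto 0)
    );
end entity adc_capture;

architecture rtl of adc_capture is
    signal div   : std_logic := '0';
    signal wait1 : std_logic := '0';  -- delays the fetch by one more cycle
begin
    process (clk)
    begin
        if rising_edge(clk) then
            div   <= not div;        -- divided 25 MHz clock
            wait1 <= div;
            if wait1 = '1' then      -- two main clock cycles after the
                sample <= adc_data;  -- rising edge of adc_clk: fetch data
            end if;
        end if;
    end process;

    adc_clk <= div;
end architecture rtl;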

 

Maik