
So much more designing with Incremental Design?

Altera_Forum
Honored Contributor II

Okay guys, you were a lot of help with getting me started in ID. I still have questions, such as in an earlier thread today, but it is turning a failure into a success.

 

Just to note: 

I had posted a way to actually grab global resources in a device. Although this worked with the old LogicLock, ID seems to ignore it.

 

Now I fall far short in my knowledge. The steps for doing ID in a bottom-up flow are pretty well explained, but all you've told me is that the top-down flow is the way to go. As I read all of your inputs, there appears to be something I'm not getting.

So, if you all would please, treat me like an idiot and explain to me just how to work the top-down flow. (Some details please, remember I'm pretty dumb!) :o
Altera_Forum
Honored Contributor II

You mean top-down to do a pseudo-bottom-up flow? (I guess the terminology is the first place for improvement.) Anyway, I'm a big fan of that flow and have used it a number of times. 

 

1) Create partitions. For simplicity's sake, we have a module called top, which instantiates hierarchies A, B, C, D and E. Let's say I put partitions on A-D but not E. (Your top level is always a partition.)

 

2) I then set partitions B-D to Empty and leave A and Top as Source. Compile.

The fitter will fit all of Top/E and A.
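
For reference, here is a rough sketch of what those step 1-2 assignments look like in the .qsf (Tcl-syntax assignments). The hierarchy/instance names are placeholders; the safest way to get the exact names is to make the assignments once in the Design Partitions Window and read back what Quartus writes into your .qsf.

  # Hypothetical .qsf fragment for steps 1-2; instance names are made up.
  # Top is always the root partition; A stays Source, B-D start out Empty.
  set_instance_assignment -name PARTITION_HIERARCHY root_partition -to | -section_id Top
  set_global_assignment -name PARTITION_NETLIST_TYPE SOURCE -section_id Top

  set_instance_assignment -name PARTITION_HIERARCHY a_part -to "A:inst_a" -section_id "A:inst_a"
  set_global_assignment -name PARTITION_NETLIST_TYPE SOURCE -section_id "A:inst_a"

  set_instance_assignment -name PARTITION_HIERARCHY b_part -to "B:inst_b" -section_id "B:inst_b"
  set_global_assignment -name PARTITION_NETLIST_TYPE EMPTY -section_id "B:inst_b"
  # ...repeat the same pair for C and D (EMPTY on this first pass).
  # E has no partition, so it simply compiles as part of the Top partition.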

 

At this point I have some options, depending on my goals. 

 

3a) Set A to Post-Fit Placement, set B to Source, and leave Top at Source. This will keep the placement of A, and refit Top, B and E around A's placement. The nice thing about this is that B now knows how A is placed, and can make decisions based on that. So if you have a critical path going between B and A, the fitter can place B's nodes in the best manner to accommodate that critical path. (Whether Top is set to Source or Post-Fit is up to the user and design dependent; I'm mostly just trying to show some of the things you can do.) Continue in this fashion, with multiple runs setting C and then D to Source and the other partitions to Post-Fit. Note that if you don't floorplan, by the time you get to D it will probably have a pretty holey floorplan to deal with (since A-C are post-fit and spread across the device). If you have room and/or D is not timing critical, this might work without floorplanning.
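
If you'd rather drive a 3a-style pass from a script than from the GUI, a minimal sketch in quartus_sh Tcl looks roughly like this (project and hierarchy names are invented; I'd confirm the exact assignment names against whatever the GUI writes into your .qsf):

  # quartus_sh -t pass_3a.tcl   (hypothetical project/partition names)
  load_package flow
  project_open my_design

  # Keep A's placement, open up B, leave Top as Source for this pass.
  set_global_assignment -name PARTITION_NETLIST_TYPE POST_FIT -section_id "A:inst_a"
  set_global_assignment -name PARTITION_FITTER_PRESERVATION_LEVEL PLACEMENT -section_id "A:inst_a"
  set_global_assignment -name PARTITION_NETLIST_TYPE SOURCE -section_id "B:inst_b"
  set_global_assignment -name PARTITION_NETLIST_TYPE SOURCE -section_id Top

  execute_flow -compile    ;# Top/E and B are refit around A's locked placement
  project_close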

 

3b) Set A to Empty, set Top to Post-Synthesis (or Post-Fit) and set B to Source. This basically lets the fitter work on B now, and not A. You absolutely have to floorplan with this flow, because the fitter now doesn't know where A was placed and may put a node of B in an identical place as a node of A, which naturally won't work. After B is fit, set it to Empty and continue to C, which is now Source. Compile, then set C to Empty and D to Source. Compile. Now set A, B, C and D to Post-Fit and recompile. Even though they had been set to Empty, Quartus still has the info from when they were last fit, and they will now all have post-fit information. I had a design with a module repeated 12 times, with very little interaction between them, none of it timing critical. I had a script that basically went through all 12 of them, fitting each one into its LogicLock region while the other 11 were empty, and then did a final compile where all 12 were post-fit, and voila, they all came back with their preserved placement.
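
The 12-copies script he mentions could look something like this rough sketch, again in quartus_sh Tcl with made-up names; it assumes each copy already has its own partition and LogicLock region set up in the .qsf:

  # quartus_sh -t fit_each.tcl   (hypothetical names throughout)
  load_package flow
  project_open my_design

  set insts {inst0 inst1 inst2}   ;# ...extend to however many copies exist

  # Fit one copy at a time while every other copy is Empty.
  foreach target $insts {
      foreach i $insts {
          set ntype [expr {$i eq $target ? "SOURCE" : "EMPTY"}]
          set_global_assignment -name PARTITION_NETLIST_TYPE $ntype -section_id "blk:$i"
      }
      execute_flow -compile
  }

  # Final pass: set everything to Post-Fit and do one compile to stitch it together.
  foreach i $insts {
      set_global_assignment -name PARTITION_NETLIST_TYPE POST_FIT -section_id "blk:$i"
  }
  execute_flow -compile
  project_close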

 

3c) A slight variation. On the first pass, A and Top are Source, while B-D are Empty. Let's make E a partition and set it to Empty also. So I now fit A and get it to meet timing. On my second pass I delete the partitions on B-E, and everything gets absorbed into the top partition on the next synthesis pass. This is nice when I really only wanted to isolate/concentrate on A and didn't want separate partitions on the other hierarchies.

 

Hopefully those are some ideas to help get you going.
Altera_Forum
Honored Contributor II

Thanks, I've printed this out and will try it. 

 

The thing that I think (?) I'm missing is where the documentation talks about being able to keep/meet timing between partitions with the top-down flow vs. the bottom-up.

Some of the things I've been reading (and reading into?) suggest that with the top-down flow the top level is passed to lower-level designers and they work on the lower-level design using the ...? And there I go from information to guesswork.

 

There are a couple of things like this even in the document on ID. I do have an SR in to Altera on it, and apparently they know what I'm talking about but don't have ready examples, etc., and are working on them.
Altera_Forum
Honored Contributor II

The first recommendation is to register all inputs and outputs of partitions. 

If that's not possible (and it almost never is), the next best thing is to register just the outputs of your partitions. Even that's hard.

 

My personal opinion is that you need to architect/design so that critical paths do not cross between partitions. If you have lots of them, it can be really tricky. I believe you can:

a) Put I/O constraints on the inputs/outputs of partitions. This lets you tell the fitter how critical each path is (a sketch of such a constraint follows after this list).

b) A LUT is created to represent the fan-out/fan-in of a partition when it connects to an empty partition. So if you're just optimizing hierarchy A and the hierarchies it connects to are empty, you could put the node that an output of A feeds at the location where B will go and then apply a tight timing constraint. I believe Project -> Generate Bottom-Up Design Partition Scripts can do some of this, but at a generic level (i.e. one constraint value for all inputs). I have never done this and, again, recommend avoiding it, as it seems too easy to get lost in the weeds.
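
For (a), the constraint itself is just ordinary SDC in TimeQuest. A minimal sketch, assuming registered endpoints on both sides and invented hierarchy names (the 3.0 ns value is purely illustrative):

  # In the project .sdc -- hypothetical hierarchy names, illustrative value only.
  # Tighten register-to-register paths that cross from partition A into partition B.
  set_max_delay -from [get_registers {A:inst_a|*}] -to [get_registers {B:inst_b|*}] 3.0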

I had one design in an EP2S180 where the critical paths were all this control/data logic criss-crossing between all the sub-blocks in the design (like a big cross-point switch). I told them we could not use incremental compilation because everything was just too dependent on everything else, and trying to split the problem into smaller problems would always break when we brought everything together. So we had to do flat compiles every time (and those barely met timing).

By the way, you're doing incremental compilation to preserve performance on tight-timing modules, correct? (i.e. you're not just trying to reduce compile times). Is it just a few blocks or lots of them? What percentage of your design has trouble meeting timing?
Altera_Forum
Honored Contributor II

Thanks.

Yeah, I've been pretty good about registering inputs and outputs in the design partitions, but I do have some signals that I have to act on immediately and can't wait for a clock edge; it would be too late even at 500 MHz.

 

Yes, the idea is to preserve timing in the lower-level partitions. I don't really care about compile times.

Most of my design is meeting timing pretty well. I have some issues that I asked about in another thread, but I'm working on them as I write. I already know it will all meet timing; I just need to check it thoroughly.
Altera_Forum
Honored Contributor II

Re: your comment "Some of the things I've been reading (and reading into?) suggest that with the top-down flow the top level is passed to lower-level designers and they work on the lower-level design using the ...? And there I go from information to guesswork."

 

I think this is that terminology thing that Rysc mentioned... 

 

In the docs, the Altera-defined top-down flow is all within one project. It assumes that even if different designers are working on RTL for different modules, everything will be compiled in one top-level project for placement & routing. You can use Rysc's methods of targeting one partition at a time using Empty, and possibly LogicLock regions to create a floorplan. He called that "top-down to do a pseudo-bottom-up flow". In a top-down flow, the fitter sees everything at once, as long as the partition is not set to Empty. So all the timing and placement info is kept in one project and you don't need to pass anyone anything.

 

The bottom-up flow is the one where you have several designers working in different Quartus II projects because they want to do their own placement optimization at the "bottom" before they send it "up" to the top. That's the flow where the lower-level designers need some info about the top level and about other partitions, so they know where to place their block, how it interfaces with other things, etc. There is an option called "Generate bottom-up design partition scripts" which can pass info to lower-level designers, like you mentioned. They can use the script to create a new project with assignments from the top-level designer, like a LogicLock region as a boundary for all their logic, etc. Then they work on their project by themselves, make it meet timing, then preserve the results by exporting a Quartus II Exported Partition file (.qxp) with a post-fit netlist (placement and optionally routing). The top-level designer imports the QXP from the bottom and brings it into the top project as one of the partitions, and it uses the placement that was specified in the QXP.
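
To make that a bit more concrete, the assignments those generated scripts hand to the lower-level designer are roughly of this shape. This is a hedged sketch with invented names and sizes; the real scripts also carry over clocks, timing constraints and the exact region geometry chosen at the top:

  # Hypothetical .qsf fragment for the lower-level project of block B.
  # Ports that will be internal nodes at the top level become virtual pins,
  # so the fitter doesn't try to drive them onto real device pins.
  set_instance_assignment -name VIRTUAL_PIN ON -to data_in[*]
  set_instance_assignment -name VIRTUAL_PIN ON -to data_out[*]

  # A LogicLock region matching the spot reserved for B in the top-level floorplan
  # (size and origin values are placeholders).
  set_global_assignment -name LL_ENABLED ON -section_id region_B
  set_global_assignment -name LL_AUTO_SIZE OFF -section_id region_B
  set_global_assignment -name LL_STATE LOCKED -section_id region_B
  set_global_assignment -name LL_WIDTH 20 -section_id region_B
  set_global_assignment -name LL_HEIGHT 15 -section_id region_B
  set_global_assignment -name LL_ORIGIN X30_Y20 -section_id region_B
  # Assign the whole lower-level design (root "|") to the region.
  set_instance_assignment -name LL_MEMBER_OF region_B -to | -section_id region_B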

 

Maybe that helps with terminology!