After 20 minutes it was still running, so I stopped the flow and ran it in the GUI instead:
History tab of the Timing Analyzer GUI:
qsta_utility::auto_CRU "create_timing_netlist -snapshot final -model slow"
Console tab:
The part of the flow below takes the vast majority of the time (everything after "Successfully loaded final database: elapsed time is 00:00:12."):
*******************************************************************
Running Quartus Prime Timing Analyzer
*******************************************************************
The Quartus Prime Shell supports all TCL commands in addition
to Quartus Prime Tcl commands. All unrecognized commands are
assumed to be external and are run using Tcl's "exec"
command.
- Type "exit" to exit.
- Type "help" to view a list of Quartus Prime Tcl packages.
- Type "help <package name>" to view a list of Tcl commands
available for the specified Quartus Prime Tcl package.
- Type "help -tcl" to get an overview on Quartus Prime Tcl usages.
*******************************************************************
project_open -force "synplify_synth_quartus_fit/Achilles_arria_X.qpf" -revision Achilles_arria_X
qsta_utility::auto_CRU "create_timing_netlist -snapshot final -model slow"
Automatically reading constraints and updating the timing netlist. To change this behavior, see Timer Analyzer Settings.
Parallel compilation is enabled and will use up to 8 processors
Loading final database.
Loading "final" snapshot for partition "root_partition".
Loading "final" snapshot for partition "auto_fab_0".
Successfully loaded final database: elapsed time is 00:00:12.
Core supply voltage operating condition is not set. Assuming a default value of '0.9V'.
Low junction temperature is 0 degrees C
High junction temperature is 100 degrees C
After that, reading the SDC and computing are faster:
The Timing Analyzer is analyzing 36 combinational loops as latches. For more details, run the Check Timing command in the Timing Analyzer or view the "User-Specified and Inferred Latches" table in the Synthesis report.
The Timing Analyzer found 73 latches that cannot be analyzed as synchronous elements. For more details, run the Check Timing command in the Timing Analyzer or view the "User-Specified and Inferred Latches" table in the Synthesis report.
Reading the HDL-embedded SDC files elapsed 00:00:00.
Reading SDC File: '../../sdc/Achilles_arria_X_project_quartus.sdc'
Reading SDC File: '../../sdc/fpga.fdc'
Clock uncertainty is not calculated until you update the timing netlist.
Reading SDC files elapsed 00:00:08.
...
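For reference, the same GUI flow can be scripted in batch mode with `quartus_sta -t`. This is only a sketch: the project/revision names are taken from the log above, and the exact command set may differ between Quartus editions.

```tcl
# run_sta.tcl -- sketch of the same timing-analysis flow in batch mode.
# Run with: quartus_sta -t run_sta.tcl
# Project and revision names are copied from the log above; adjust as needed.

project_open -force "synplify_synth_quartus_fit/Achilles_arria_X.qpf" -revision Achilles_arria_X

# Build the timing netlist from the final (post-fit) snapshot, slow model.
create_timing_netlist -snapshot final -model slow

# Read the SDC constraints registered in the project, then update the netlist
# (this is what the GUI's auto_CRU wrapper does automatically).
read_sdc
update_timing_netlist

project_close
```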
What kind of messages are you seeing in Quartus when this happens? And why does your SDC file have an extension of .fdc instead of .sdc? Perhaps you have not manually added the SDC files in the Quartus Timing Analyzer settings, so it's stuck, while in the Timing Analyzer GUI, you choose to manually read in the correct file(s).
#iwork4intel
It is .fdc because I use the same file within Synplify Pro for synthesis.
The .fdc/.sdc files are present in the .qsf, don't worry.
I added a couple of false paths, and for some reason that speeds up the Timing Analyzer a lot (about 20x faster).
Maybe when there is a lot of negative timing slack,
the internal timing database of Quartus grows (RAM and/or disk space) and slows down exponentially?
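For illustration, false paths are declared in the SDC file like this; the clock and port names below are placeholders, not from this project. Cutting failing asynchronous crossings out of the analysis reduces the number of paths the analyzer has to enumerate and store.

```tcl
# Hypothetical false-path constraints (SDC); replace the names with your own
# asynchronous or don't-care paths.

# Cut all paths between two unrelated clock domains:
set_false_path -from [get_clocks clk_a] -to [get_clocks clk_b]

# Cut paths to outputs whose timing does not matter, e.g. status LEDs:
set_false_path -to [get_ports {debug_led[*]}]
```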
I have also set up to 16 cores to be used; that speeds things up as well.
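The core count is set per project in the .qsf; a minimal fragment, assuming the default Quartus settings file:

```tcl
# .qsf fragment: allow Quartus to use up to 16 parallel processors.
set_global_assignment -name NUM_PARALLEL_PROCESSORS 16
```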
Linux job stats after I close the job:
CPU time : 43799.84 sec.
Max Memory : 19792 MB
Average Memory : 4198.15 MB
Total Requested Memory : 48000.00 MB
Delta Memory : 28208.00 MB
Max Processes : 21
Max Threads : 92
Run time : 76589 sec.
Turnaround time : 76591 sec.
There is an interesting fitter report table that shows the added delay to meet hold timing:
*.fit.rpt
Estimated Delay Added for Hold Timing Details (Delay Added in ns)
Note: This table only shows the top 100 path(s) that have the largest delay added for hold.
Is there a way to extend the table to, for example, 5000 paths instead of 100?
Well, I was a bit too enthusiastic; the Timing Analyzer still ran for 24 minutes.
Wow,
I archived the Quartus 20.1 project under Linux and extracted it under Windows 10 with Quartus 20.2,
and the run time has dropped from 6 hours to 1h15m!
Either Quartus 20.2 is much faster than Quartus 20.1,
or the Linux implementation is much slower than the Windows one.
I will install Quartus 20.2 under Linux to find out.