I'd like to achieve a simple execution flow for my design/project, i.e.:
<1> first step: execute the <map> step
<2> second step: load the database from <1> and execute the <fit> step for multiple seeds
So far I'm doing it in a slightly different way: as a first step I create directories that serve as unique workspaces, one per seed, and simply replicate all project input files into these workspaces. Then both the <map> and <fit> steps are executed separately for each workspace.
It is not clear to me from the documentation how to improve my script to achieve the aforementioned ideal flow. Right now my script looks as follows:
load_package flow
project_open <XXX> -revision <XXX>
set_global_assignment -name SEED <XXX>
execute_module -tool syn -args "<SOME_MAP_ARGS>"
execute_module -tool fit -args "<SOME_FIT_ARGS>"
execute_module -tool fit -args "<ANOTHER_FIT_ARGS>"
project_close
Since all of my project/design files are in the same directory and all steps are invoked by a single script, everything works without a glitch. But I'm not sure how to decouple the <syn> step from my script and turn it into something that happens only once in my flow, whose result could then be loaded by another script responsible solely for executing the <fit> step with a specific SEED. Can you please point me to an example covering this use case?
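For reference, the decoupled flow I'm after could presumably be split into two Tcl scripts along these lines (an untested sketch: the script names are mine, the project/revision placeholders are as above, and whether the fitter picks up the existing post-synthesis database without re-running synthesis should be confirmed for the Quartus edition in use):

```tcl
# syn_once.tcl -- run a single time: quartus_sh -t syn_once.tcl
load_package flow

project_open <XXX> -revision <XXX>
execute_module -tool syn -args "<SOME_MAP_ARGS>"  ;# synthesis writes its database into the project's db directory
project_close
```

```tcl
# fit_seed.tcl -- run once per seed: quartus_sh -t fit_seed.tcl <SEED>
load_package flow

set seed [lindex $quartus(args) 0]                ;# quartus_sh -t passes script arguments in $quartus(args)
project_open <XXX> -revision <XXX>
set_global_assignment -name SEED $seed
execute_module -tool fit -args "<SOME_FIT_ARGS>"  ;# reuses the database left behind by syn_once.tcl
project_close
```

Note that this only decouples the steps in time; to fit several seeds concurrently, each fit would still need its own copy of the project database, which is essentially what my per-seed workspace replication does today.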
You may try quartus_dse with the compilation mode "Fitting and timing analysis" or "Fitting, timing analysis and assembler" — it compiles the design once and then sweeps seeds over the fitter stage.
Execute quartus_dse --help to check the syntax.
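As a starting point, a minimal seed-sweep invocation might look like the line below (the flag name is from memory, so do verify it against quartus_dse --help for your version before relying on it):

```shell
quartus_dse <XXX> --num-seeds 5
```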