I am preparing to build a workstation devoted purely to FPGA synthesis and implementation. The big question is: what would be the ideal machine for processing a single project?
I assume tough project parameters:
- that the project is big (over 600k LC)
- the utilisation of logic, FFs, BRAM/URAM/HBM and DSPs is high (~80%)
- frequency of operation is high (very little slack)
- the design is achievable (it can be successfully placed and routed, and timing closes)
- the project is written in HDL
- the machine has only this one task / project to run at a time
- the machine runs on Linux OS
- the latest Quartus version is used.
I know that a lot can be gained by relaxing the parameters above, but for the sake of argument they are fixed - the project is tough but doable. No tool tricks (like Rapid Recompile, etc.) are used. Just a pure brute-force compile from zero to hero.
1) What would be the perfect machine to deliver the quickest results (from ready source code to bitstream)?
2) What kind of memory operations are the limiting factor for the synthesis tool to operate?
___a) memory latency?
___b) memory bandwidth?
___c) memory capacity - it is certainly a limiting factor; however, I can afford a lot of RAM (like 256GB), so this point is out of the discussion.
3) Which CPU is better?
___a) more, but slower cores?
___b) fewer, but faster cores?
4) Disk space
___a) NVMe? Maybe RAID?
___b) RAM disk?
5) Does a network drive (NAS) affect performance? In other words, is disk access latency hidden?
6) Would switching to HLS change the answers above?
I have my gut feelings about that, but I would love to hear from you about your experiences. Having some indication is probably a better approach than just blindly spending my money.
I guess nobody has done a full-scale investigation, but the short question is: what do you do in this area?
Have a great day.
A synthesis tool such as Quartus runs several algorithms, including heavy optimization, which is why the compilation process involves so many operations. In my project (~480k LC) 64GB of RAM was good enough. I would recommend starting with 64GB, running a full compilation of the 600k-LC project, and watching memory utilization. If it exceeds about 48GB, add more (e.g. +32GB) and check again.
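If you want an actual number rather than eyeballing `top`, here is a minimal sketch that captures the peak memory of a compile run. It assumes you launch the compile from the command line (for Quartus that would be something like `quartus_sh --flow compile <project>`; the project name below is a placeholder):

```python
import resource
import subprocess
import sys

def peak_child_rss_mib(cmd):
    """Run a command to completion and return the peak resident set
    size, in MiB, among its waited-for child processes.
    Note: for a multi-process compile this reports the hungriest
    single process, which is usually the figure that matters for
    sizing RAM. On Linux, ru_maxrss is in kibibytes."""
    subprocess.run(cmd, check=True)
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss / 1024

if __name__ == "__main__":
    # Placeholder command -- for a real run this might be e.g.
    # ["quartus_sh", "--flow", "compile", "my_project"].
    cmd = sys.argv[1:] or ["python3", "-c", "pass"]
    print(f"peak RSS: {peak_child_rss_mib(cmd):.0f} MiB")
```

Run a full compile through this once and you know whether 64GB is enough before buying more.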
The number of CPU cores has an impact on efficiency, under one very important condition: the optimization algorithms must be parallelized, and not every algorithm can be. In my practice, even when I allowed the tool to use all cores (an i9 Extreme with 32 logical cores), only 4 of them were utilized at 100%; during some specific phases another 4 joined in, so about 8 were utilized over the whole run. Since the OS needs some headroom too, 12 logical cores should be enough.
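You can verify how many cores your own compile actually saturates, rather than trusting a spec sheet. A Linux-only sketch that samples `/proc/stat` twice and reports each logical CPU's busy fraction (run it in a second terminal while the build is in its placement or routing phase):

```python
import time

def per_core_busy(interval=1.0):
    """Sample /proc/stat twice and return each logical CPU's busy
    fraction over the interval (Linux only)."""
    def snapshot():
        stats = {}
        with open("/proc/stat") as f:
            for line in f:
                name = line.split()[0]
                # Skip the aggregate "cpu" line; keep cpu0, cpu1, ...
                if name.startswith("cpu") and name != "cpu":
                    fields = [int(v) for v in line.split()[1:]]
                    idle = fields[3] + fields[4]  # idle + iowait ticks
                    stats[name] = (sum(fields), idle)
        return stats

    before = snapshot()
    time.sleep(interval)
    after = snapshot()
    busy = {}
    for cpu, (t0, i0) in before.items():
        t1, i1 = after[cpu]
        total, idle = t1 - t0, i1 - i0
        busy[cpu] = (1 - idle / total) if total else 0.0
    return busy

if __name__ == "__main__":
    for cpu, frac in sorted(per_core_busy().items(),
                            key=lambda kv: int(kv[0][3:])):
        print(f"{cpu}: {frac:.0%}")
```

If only 4-8 cores ever go near 100% during the long phases, spending money on a 32-core part buys you little for this workload.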
I also observed that CPU frequency had a significant impact on compilation time.
Concluding, based on my experience, I got the best Quartus build times with fewer (8) but more efficient cores (large cache, high frequency, the newest architecture), plus plenty of RAM with reasonable access latency.
NVMe storage has less impact on compilation time than the CPU and memory do; storage I/O is a minor part of the processing/optimization phases.
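If you still want to quantify the storage question from the original post (NVMe vs. RAID vs. a tmpfs RAM disk), a quick write-throughput probe of candidate build directories is enough to show whether the disk is ever the bottleneck. The mount points in the example are placeholders for your own:

```python
import os
import tempfile
import time

def write_throughput_mib_s(directory, size_mib=256, chunk_mib=8):
    """Write size_mib of random data to a scratch file in `directory`,
    fsync it, and return the sustained throughput in MiB/s."""
    chunk = os.urandom(chunk_mib * 1024 * 1024)
    with tempfile.NamedTemporaryFile(dir=directory) as f:
        start = time.perf_counter()
        for _ in range(size_mib // chunk_mib):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to the device, not just cache
        elapsed = time.perf_counter() - start
    return size_mib / elapsed

if __name__ == "__main__":
    # Placeholder paths -- point these at your NVMe mount, RAID
    # volume, or a tmpfs RAM disk to compare them.
    for d in ("/tmp", "."):
        print(f"{d}: {write_throughput_mib_s(d, size_mib=64):.0f} MiB/s")
```

In my experience the difference between candidates barely moves the total compile time, for exactly the reason above: the tool spends its time in CPU- and memory-bound optimization, not in I/O.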