Back in the days of HLS compiler version 17.1, the "stream_in" class seemed to materialize as block-RAM FIFOs, while later compiler versions also try to utilize MLAB memory (Arria 10).
Is there a way to specify which memory type to use for streams, or alternatively, what conditions need to be satisfied before the compiler chooses block RAM?
I can only find attributes for directing variables/arrays into a specific memory type, like hls_memory_impl("type"), but this doesn't work for streams.
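To illustrate the asymmetry: the attribute works on plain arrays but there is no counterpart for a stream's internal FIFO. This is only a sketch, assuming the Intel HLS API (it needs the i++ toolchain and "HLS/hls.h", so it won't build standalone); the component and variable names are made up for the example:

```cpp
#include "HLS/hls.h"

component int accumulate(ihc::stream_in<int> &data_in) {
  // For a local array, the memory technology CAN be pinned explicitly:
  hls_memory hls_memory_impl("BLOCK_RAM") int buf[64];

  for (int i = 0; i < 64; ++i)
    buf[i] = data_in.read();

  int sum = 0;
  for (int i = 0; i < 64; ++i)
    sum += buf[i];
  return sum;
}

// By contrast, the stream interface only exposes capacity, not technology:
// ihc::stream_in<int, ihc::buffer<64>> sets the FIFO depth, but there is no
// hls_memory_impl-style parameter to choose BLOCK_RAM vs MLAB for it.
```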
I assume that you are trying to use the ihc::buffer<> parameter in your design; please tell me if this is incorrect.
There is currently no way to force the HLS compiler to choose a different FIFO technology for the stream_in FIFO buffer. However, you may create a FIFO using the IP Catalog and place it in front of your HLS component using Platform Designer.
Thanks for looking into this. I'm hoping this will arrive in a future version. Apparently there was a change after version 17.1, and I can't find the details on it.
After 17.1, I get non-functional RTL when implementing a 608-bit-wide, 64-deep stream, which seems to be implemented in MLABs only. My area analysis report lists a few "Memory with unknown name (address space xx)" entries that seem to behave incorrectly.
| Memory with unknown name (address space 64) | 0 | 0 | 0 | 32 | 0 | Stall-free, 76B requested, 128B implemented. |
| Memory with unknown name (address space 65) | 0 | 0 | 0 | 12 | 0 | Stall-free, 76B requested, 128B implemented. |
| Memory with unknown name (address space 67) | 0 | 0 | 0 | 26 | 0 | Stall-free, 76B requested, 128B implemented. |
| Stream 'prq_prep_pullstream' | 15 | 30 | 0 | 1 | 0 | 608b wide with 64 elements. |
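For reference, a stream of the shape described (608 bits wide, 64 elements deep) would be declared along these lines. This is a hedged sketch assuming the Intel HLS API (requires the i++ toolchain); the payload struct and component name are hypothetical, not taken from the original design:

```cpp
#include "HLS/hls.h"

// Hypothetical 608-bit payload: 19 x 32-bit words = 608 bits.
// The name is illustrative only, not from the original design.
struct PullWord {
  unsigned int w[19];
};

// ihc::buffer<64> requests a 64-element FIFO; the compiler, not the user,
// decides whether that FIFO is built from MLABs or block RAM.
using PullStream = ihc::stream_in<PullWord, ihc::buffer<64>>;

component PullWord take_one(PullStream &prq_prep_pullstream) {
  return prq_prep_pullstream.read();
}
```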
Would you mind sharing your code here? It could be useful for analysis.
If you are using the ihc::depth parameter with a stream, and accessing it using tryRead(), there is a known issue described in the link below: Intel High Level Synthesis Compiler Pro Edition: Best Practices Guide
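The combination being referred to looks roughly like this. A sketch only, assuming the Intel HLS API (requires the i++ toolchain, so not runnable standalone); ihc::buffer<> is used here as the stream capacity parameter, which I take to be what "ihc::depth" refers to in the post above:

```cpp
#include "HLS/hls.h"

// A stream with an explicit FIFO capacity of 64 elements.
using DeepStream = ihc::stream_in<int, ihc::buffer<64>>;

component int drain_one(DeepStream &in) {
  bool success = false;
  // tryRead() is the non-blocking read: 'success' reports whether a value
  // was actually available on this call. This depth + tryRead() combination
  // is the pattern the known issue concerns.
  int v = in.tryRead(success);
  return success ? v : 0;
}
```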
Unfortunately I cannot share the code as is. I would have to write a test using the same stream, but that is not on my priority list right now. As long as the 17.1 compiler can build this, I have a way out.
Note: this is not a case of simulation not working; it is the generated RTL that stops working. C simulation works fine. I have not simulated the RTL in ModelSim because I don't have the right testbench for that type of simulation (I draw the results visually using Windows GDI rather than comparing numbers).