Intel® Quartus® Prime Software
Intel® Quartus® Prime Design Software, Design Entry, Synthesis, Simulation, Verification, Timing Analysis, System Design (Platform Designer, formerly Qsys)

Why does the aoc compiler increase channel depths beyond the ones that are specified? Is there a way to fix the code to prevent that?

NSriv2
Novice

Hi,

 

I have seen many cases in which I specify the channel depth as:

channel drain_struct _A_feeder_0_channel[_II_][_JJ_-1] __attribute__((depth(0)));
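For context, a minimal self-contained sketch of such a declaration is below. The `drain_struct` definition and the `_II_`/`_JJ_` values are assumptions (chosen so the array has 150 elements of 512 bits, matching the report message quoted later); the channels extension pragma is required by the Intel FPGA SDK for OpenCL:

```c
// Sketch only: payload type and macro values are assumed, not from the
// original code.
#pragma OPENCL EXTENSION cl_intel_channels : enable

typedef struct { float data[16]; } drain_struct;  // 16 x 32 bits = 512 bits

#define _II_ 10
#define _JJ_ 16  // 10 * (16-1) = 150 channel array elements

// depth(0) requests a zero-depth (rendezvous) channel; the compiler may
// still deepen it for scheduling reasons, as the report message shows.
channel drain_struct _A_feeder_0_channel[_II_][_JJ_-1] __attribute__((depth(0)));
```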

 

 

But the compiler increases the channel depth to 1. Here is the message from the report:

 

Channel array with 150 elements. Each channel is implemented 512 bits wide by 1 deep. Requested depth was 0. Channel depth was changed for the following reason: instruction scheduling requirements. See Best Practices Guide: Channels for more information.

 

 

Can someone explain what instruction scheduling requirements result in changing the channel depths? And is there a way to fix this?

 

Thanks,

Nitish

3 Replies
HRZ
Valued Contributor II

There is no documentation or information about this behavior beyond the message you see in the report, and no standard way to prevent the compiler from increasing the channel depth on its own. In your case, since the depth is only being increased from 0 to 1, the area overhead will be negligible and there is no reason to worry. The real problem arises when you set the channel depth to 1 and the compiler decides to increase it to 1024, wasting half the Block RAMs of the device to implement a handful of channels… I reported one such case to Altera over a year and a half ago; they said they would try to improve the behavior in the future, but as you can see, nothing has changed.

 

In extreme cases, you can create a false cycle of channels; the compiler will then no longer optimize the channel depth and will use the exact depth you specified in the kernel.

NSriv2
Novice

Thanks HRZ. I also have cases where channel depths are increased by more than one. I didn't understand your "false cycle" suggestion; can you please elaborate on that?

HRZ
Valued Contributor II

Let's say you have two kernels, A and B, and channel A-B sends data from A to B. If you create a channel B-A from B to A whose input depends on the output of the A-B channel, this creates a cycle; in this case, the compiler will not *optimize* (i.e., increase) the channel depth anymore. Where you have a problem with forcing a channel depth, you can create a false B-A channel to force the depth of all channels in the cycle. Of course, you have to be very careful with channel ordering in this case, or else you might encounter deadlocks.
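The workaround above can be sketched as follows. This is a hedged, minimal illustration with hypothetical kernel and channel names (`fwd_ch`, `fake_ch`), assuming the Intel FPGA SDK for OpenCL channel extension; the exact structure of the false dependency will depend on your kernels:

```c
#pragma OPENCL EXTENSION cl_intel_channels : enable

channel int fwd_ch  __attribute__((depth(0)));  // real A -> B channel
channel int fake_ch __attribute__((depth(0)));  // false B -> A channel closing the cycle

__kernel void A(int n) {
  int ack = 0;
  for (int i = 0; i < n; i++) {
    // Reading the fake channel makes A's next write depend on B's
    // output, so the compiler sees a channel cycle and keeps the
    // requested depths instead of deepening them.
    if (i > 0) ack = read_channel_intel(fake_ch);
    write_channel_intel(fwd_ch, i + ack);
  }
}

__kernel void B(int n) {
  for (int i = 0; i < n; i++) {
    int v = read_channel_intel(fwd_ch);
    // Write back a dummy value; the skipped write on the final
    // iteration keeps the read/write ordering deadlock-free here,
    // but as noted above, ordering must be chosen carefully.
    if (i < n - 1) write_channel_intel(fake_ch, v & 0);
  }
}
```

The dummy value carries no information (it is always 0); its only purpose is to create the dependency edge from B back to A.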

 

Needless to say, since the compiler itself honors the user-specified depth when a cycle of channels exists, the functionality to force channel depth clearly already exists in the compiler; it is just that Altera/Intel don't want to expose it to users…
