Intel® oneAPI Threading Building Blocks
Ask questions and share information about adding parallelism to your applications when using this threading library.

How to consume an overwrite node?


I have a task graph whose body has a maximum concurrency of 1, similar to the pipeline described in How to make a pipeline with an Intel® Threading Building Blocks flow graph.

As in the pipeline example, I want another evaluation of the body to start automatically as soon as the current one ends, if an input is already available. I want the input to my graph to be an overwrite_node, since that lets me always keep the "freshest" input for the graph to consume when it's ready. However, when the graph reads from the overwrite_node, the node's contents are not invalidated, so the body might run with the same input multiple times, which is undesirable for me. I'd like the overwrite_node's contents to be invalidated when they are passed to a successor, so that each input triggers at most one evaluation of the graph. Is this possible?

Some background detail: I'm recording D3D12 commands and submitting them from task nodes in order to render a frame of animation. With the current design, I can't submit more than one frame of animation concurrently, since that would interleave the rendering commands of different frames quasi-randomly (likely producing nonsense). Because the inputs to frames can be produced faster than frames are rendered, I keep only the "freshest" input and drop stale ones. I just don't want to render the same frame multiple times in a row: its results are already displayed on the screen, so recomputing them would be wasteful.

My current workaround is to implement the buffering and limiting manually with custom multi-threaded code. It works, but it may be more complicated than necessary, so I hope that solving this problem will simplify things.
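The core of that manual workaround is roughly the following (a simplified sketch using only the standard library; the class name and interface are mine, not from any library): a single-slot "mailbox" where put() overwrites with the freshest value and take() removes the value, so each input is consumed at most once. This is exactly the semantics I wish overwrite_node offered.

```cpp
#include <mutex>
#include <optional>
#include <utility>

// Single-slot mailbox: put() keeps only the freshest value,
// take() empties the slot so each value is consumed at most once.
template <typename T>
class LatestSlot {
public:
    void put(T value) {
        std::lock_guard<std::mutex> lock(m_);
        slot_ = std::move(value);   // stale value, if any, is overwritten
    }

    // Returns the freshest value and invalidates the slot,
    // or std::nullopt if nothing new has arrived.
    std::optional<T> take() {
        std::lock_guard<std::mutex> lock(m_);
        std::optional<T> out = std::move(slot_);
        slot_.reset();
        return out;
    }

private:
    std::mutex m_;
    std::optional<T> slot_;
};
```

The consumer loop calls take() when the previous frame finishes; an empty result means it should idle until the next put().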

