Intel® oneAPI Threading Building Blocks
Ask questions and share information about adding parallelism to your applications when using this threading library.

How to set up an interruptible parallel_do?

ypoissant
Beginner
435 Views
We have started using TBB in our plugin to get a feel for it. I must say that, so far, we get very nice results and it is relatively easy to use.

Now I have a slightly more complex loop to parallelize than what we tackled first, and I'd like some advice on how to implement it with TBB.

The renderer must be interruptible: when the user moves the camera, we stop the current render and start anew.

parallel_for does not seem suitable for that, because it needs the whole range and splits it down. Or is it possible to opt out of a parallel_for loop, and if so, which concept do I need to look up in the documentation to accomplish this?

It seems to me that our best bet is to use a parallel_do, but I'm just too new to TBB to figure out how to do that. Can anyone shed light on how to set up a parallel_do that would be interruptible?

Yves
0 Kudos
8 Replies
RafSchietekat
Valued Contributor III
Have a look at the Tutorial under "Cancellation Without An Exception".
ypoissant
Beginner
Thanks a lot Raf. That's a nice solution.

I had read the tutorial completely, but there was just too much information to absorb in such a short period of time.

Yves
jimdempseyatthecove
Honored Contributor III
Raf,

Could the task simply monitor a cancellation request flag and then advance its iterator to the end using operator=()? It seems rather cumbersome to keep track of the tasks spawned and then issue cancel requests.

In QuickThread you would simply monitor a flag and issue a break or return:

[bash]bailOutOfRender = false;
parallel_for(fnRenderObjs, 0, nObjs);
...
void fnRenderObjs(size_t iBeginObj, size_t iEndObj)
{
   for(size_t iObj = iBeginObj; iObj < iEndObj; ++iObj)
   {
      if(bailOutOfRender)
         break;

      Objs[iObj].Render();
   }
}

...
void Object::Render()
{
   if(bailOutOfRender)
      return;
   parallel_for(fnRenderComponents, 0, nComponents);
}
...
void fnRenderComponents(size_t iBeginComponent, size_t iEndComponent)
{
   for(size_t iComponent = iBeginComponent; iComponent < iEndComponent; ++iComponent)
   {
      if(bailOutOfRender)
         break;

      Components[iComponent].Render();
   }
}
...
void Component::Render()
{
   for(size_t iPart = 0; iPart < nParts; ++iPart)
   {
      if(bailOutOfRender)
         return;
      ...
   }
}[/bash]

Jim Dempsey
RafSchietekat
Valued Contributor III

"Could the task simply monitor a cancelation request flag and then simply advance its iterator to the end using operator=()? It seems rather cumbersome to keep track of the tasks spawned then issue cancel requests."
It seems somewhat similarto "Bubbles in pipeline": a note to self will eventually get you to the exit (do nothing, do nothing, do nothing, ...), but why not also inform thealgorithm (task group)itselfto stop creating new tasks that you won't act on anyway, and bail out faster as a result?

jimdempseyatthecove
Honored Contributor III
This is not the same as "Bubbles in pipeline", since the tasks (slices of the parallel_for) have already been spawned and will not get spawned again (until the next frame). Bailing out of the for loop early (e.g. setting the iterator to end()) should do the trick. In a nested parallel_xxx programming structure, inserting the bail-out test prior to the start of the parallel_xxx will avoid a spurious (and unnecessary) start/stop of the parallel_for slice.

In the case of the parallel_pipeline, the bubbles (tokens) must get returned to the free token pool and routed back into the front of the pipeline (as a new task).

The parallel_for has no token (buffer) that circulates (and spawns a task at the front of the pipeline).

Jim
jimdempseyatthecove
Honored Contributor III
I might add that canceling a task (group) has its pluses and minuses.

On the plus side, you do not have to insert code down the (nested) task group(s).

On the minus side, you cannot elect to place your bailouts at safe places in your code, which may mean you have to add additional defensive code to handle termination of task(s) at arbitrary locations.

The generic tbb::iterator could be modified (if not already) to hold, or link to, a bail-out indicator. In this way the bail-out points will be before (or just at) the beginning of the scope of the parallel construct, thus permitting bailout prior to the ctors of objects within the task body. If need be, you can add an additional set of {}'s around sensitive code.

Jim Dempsey

Alexey-Kukanov
Employee
Jim, unless you are talking in general terms not specific to TBB, I'm afraid you misunderstand cancellation in TBB.
First, tasks are not cancelled in the middle of execution, so "defensive code to handle termination at arbitrary locations" does not apply.
Second, tasks might check cancellation state during execution, and opt for early completion.
Third, if some tasks were spawned but not yet started execution, and the whole parallel job was cancelled, these tasks won't be executed.

The blog of Andrey Marochko has articles that explain cancellation.
RafSchietekat
Valued Contributor III
#5 "This is not the same as "Bubbles in pipeline" since the tasks (slices of parallel_for) have already been spawned and will not get spawned again (until next frame)."
I only called it "something similar", because it also tries to avoid wasted work by the toolkit itself on the way out.

#5 "Bailing out of the for loop early (e.g. setting the itr to end()) should do the trick. In the event of nested parallel_xxx programming structure inserting the bail out test prior to start of parallel_xxx will avoid a spurrious (and unnecessary) start stop of the parallel_for slice."
I don't quite see what you mean here. To emulate task group cancellation with parallel_for you would need a custom range object that stops subdividing at bailout time, otherwise the same number of chunks would still be generated, each executed in its own task. It doesn't matter much for parallel_for with auto_partitioner, which wouldn't generate that many chunks anyway, but, unless there are drawbacks, why not have a more general cancellation mechanism as part of the toolkit?

#6 "On the minus side, you cannot elect to place your bailouts at safe places in your code. Which may mean you have to additional defensive code to handle termination of task(s) at arbitrary locations."
The cancellation happens before task execution starts, or when it is determined and acted on by your own code by examining is_cancelled(), so the tasks are not asynchronously terminated.