Intel® oneAPI Threading Building Blocks
Ask questions and share information about adding parallelism to your applications when using this threading library.

std::future and tasks

nagy
New Contributor I
725 Views
I have an application where I need to be able to retrieve the value of a std::future from inside a task. However, as blocking inside a task is a bad idea, I've run into a bit of a challenge.

What I would like to do is something along the lines of:

[cpp]while(!my_future.is_ready())
    tbb::task_scheduler::yield(); // If the calling context is inside an executing tbb task then run some other tasks.[/cpp]
or

[cpp]tbb::task_scheduler::oversubscribe(true); // If the calling context is inside an executing tbb task then insert another thread into task-scheduler.
my_future.wait();
tbb::task_scheduler::oversubscribe(false);[/cpp]
Are any of these, or similar solutions, possible with the current revision of TBB? Any suggestions regarding work-arounds?
0 Kudos
6 Replies
Anton_M_Intel
Employee
725 Views
Not yet; however, we are looking into implementing something like your second code snippet.
Meanwhile, probably the only simple work-around is permanent oversubscription using tbb::task_scheduler_init.
If you feel like implementing more advanced synchronization, you may try to put the excess worker threads to sleep yourself by enqueuing special sleepy tasks.
0 Kudos
RafSchietekat
Valued Contributor III
725 Views
"I have an application where I need to be able to retreive the value of a std::future from inside a task."
Why? External library? Difficult colleague? :-)

How about a user thread whose sole purpose is to wait for the future and then decrement a reference count that you wait for in the task? A deferred status may indicate that getting the value won't block, but it provides only shallow information, so you may want to experiment either way on a case-by-case basis: probably it won't block, but the penalty (of undersubscription) is higher. (Obtain the thread from a pool and park it there after use.)
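A minimal sketch of this idea in standard C++ (TBB-free, so a yield loop stands in for the scheduler stealing other tasks; `wait_via_helper` is my own name, not a TBB or standard API): a helper thread performs the blocking wait and decrements a counter, while the caller spins on the counter instead of blocking.

```cpp
#include <atomic>
#include <cassert>
#include <future>
#include <thread>

// Hypothetical sketch: a helper ("pool") thread performs the blocking wait
// on the future, then decrements the count the caller is watching. In real
// TBB code the waiting side would be a task reference count plus
// wait_for_all(), letting the scheduler run other tasks instead of yielding.
template <typename T>
T wait_via_helper(std::future<T>& f) {
    std::atomic<int> ref_count{1};
    std::thread helper([&] {
        f.wait();                        // the only place that may block
        ref_count.fetch_sub(1, std::memory_order_release);
    });
    while (ref_count.load(std::memory_order_acquire) != 0)
        std::this_thread::yield();       // a TBB task would steal work here
    helper.join();
    return f.get();                      // value is ready; get() won't block
}
```

Note that with a deferred future, f.wait() runs the deferred function on the helper thread, which is exactly the delegation described above.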

0 Kudos
nagy
New Contributor I
725 Views
Because of legacy code. I could change it to a callback-based solution where the future invokes a callback once it has received its value. However, such a change is rather large and affects a lot of code, so I would prefer to avoid it if possible.

Unfortunately I do not understand your solution suggestion. Would you care to give a bit more detailed explanation?
0 Kudos
RafSchietekat
Valued Contributor III
725 Views
The TBB task scheduler uses atomic reference counts to know when all child tasks have finished executing, and you can manipulate those directly even without actual child tasks. Waiting for such an imaginary child task to finish executing is functionally equivalent to waiting on a condition variable, but it allows the scheduler to steal another task in the meantime, and so should be more efficient, or at least less inefficient (even within a program written entirely for TBB, you should try to avoid excessive stealing).

Instead of just executing the future::get() inside a task, you would delegate that to an ad-hoc thread taken from a pool. The handover would be the "condition variable".

The pool thread would not just execute the get() itself, though: it would wait for the status to become "ready" (nothing more to be done except use the value) or determine that it is "deferred" (get() executes some code, like a function call). I now realise that either case may involve some blocking code, the former in the form of some essential member function that itself involves another future or the like, but it is still more likely with the latter. If you encounter such recursive blocking, you could deal with it in the same manner as you encounter it, or you could decide to have the pool thread prepare the value first before the receiving task takes over; the former requires adapting all the futures at once in the same manner, the latter requires dealing with perhaps less code but going deeper, so it's a trade-off.
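For the ready-versus-deferred distinction mentioned above, standard C++ already offers a non-blocking probe: future::wait_for with a zero timeout reports future_status::deferred without running the deferred function. A small sketch (the helper name `probe` is mine, purely for illustration):

```cpp
#include <chrono>
#include <future>

// Non-blocking classification of a future:
//   deferred -> get() will run the stored function on the calling thread;
//   ready    -> get() returns immediately with the value;
//   timeout  -> the value is still being produced asynchronously.
template <typename T>
std::future_status probe(const std::future<T>& f) {
    return f.wait_for(std::chrono::seconds(0));  // never blocks
}
```

The "timeout" case is the one where handing the wait off to a pool thread pays for itself.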

In any case, mixing those paradigms and resolving the impedance mismatch should let you kill quite some time...

Does that make sense, and do you agree with the approach (asking other readers at the same time)?
0 Kudos
Seunghwa_Kang
Beginner
726 Views

Anton Malakhov (Intel) wrote:

Not yet; however, we are looking into implementing something like your second code snippet.

Is there any update on this?

Following http://software.intel.com/en-us/forums/topic/393172, I need a task yield function (not a thread yield) or a mechanism to wait for a non-TBB thread to finish work.

I am currently creating a dummy TBB task and waiting for it to finish (following Raf's comment in http://software.intel.com/en-us/forums/topic/393172), hoping this will cause the waiting task to yield (something like the code below). This at least helps to work around the stall issue in http://software.intel.com/en-us/forums/topic/393172, but it is not very pretty, and whether it works depends on implementation details. I wonder whether there is a better, more standard way to do this.

I considered using a condition variable (supported by TBB, http://software.intel.com/en-us/blogs/2010/10/01/condition-variable-support-in-intel-threading-building-blocks), but it seems that waiting on a condition variable blocks the thread, not just the task.

[cpp]void dummyTask( void ) {
    return;
}

task_group dummyGroup;

while( true ) {
    dummyGroup.run( []{ dummyTask(); } );
    dummyGroup.wait();
    if( /* the non-TBB thread finished the work */ ) {
        break;
    }
}[/cpp]

0 Kudos
RafSchietekat
Valued Contributor III
726 Views

It won't do to wait for a dummy task; you have to make it a child of a continuation, otherwise the thread will still block, along with all the tasks buried below the current task on its stack. A function to temporarily oversubscribe could serve some purposes, but there's no magic to yield the stack while waiting for an asynchronous event. I think we're stuck with that for now.

Perhaps TBB could add a function to avoid the dummy child, because just calling decrement_ref_count() doesn't wake up the parent, but that's about it.

0 Kudos