Intel® oneAPI Threading Building Blocks
Ask questions and share information about adding parallelism to your applications when using this threading library.

While loop

bez
Beginner
662 Views
Hi,
I'm new to TBB, so this question might be trivial. Is there a way to parallelize a while loop that runs all the time?
E.g.:
while (!finished)
{
    if (item_exists)
    {
        // do something
    }
    // else do nothing
}
And we don't know how many items we will process, what the next item is, or even whether there is another item. In pthreads or OpenMP it's very easy to parallelize, but here it feels a bit strange to me... or maybe I just don't understand it well enough.
Please help me :)
13 Replies
RafSchietekat
Valued Contributor III

Please consult the documentation (Tutorial and Reference Manual), and look for parallel_while/parallel_do.

(Added) If you want to block waiting for new items, though, use tbb_thread instead if you have other work at the same time, because TBB will not differentiate between doing useful work and being blocked.
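
For illustration, a minimal sketch of the parallel_do route, assuming that processing an item can reveal further items (Item and ProcessItem are made-up names, not from the original post): the iterator range only seeds the work, and the feeder lets the body add items discovered along the way.

[cpp]#include <cstdio>
#include <deque>
#include "tbb/parallel_do.h"
#include "tbb/task_scheduler_init.h"

struct Item { int value; };

// Hypothetical body: processes one item and may feed new ones back in.
struct ProcessItem {
    void operator()(Item item, tbb::parallel_do_feeder<Item>& feeder) const {
        std::printf("processing %d\n", item.value);  // "do something"
        if (item.value < 3) {                        // pretend processing uncovered more work
            Item next = { item.value + 1 };
            feeder.add(next);                        // parallel_do keeps going until nothing is left
        }
    }
};

int main() {
    tbb::task_scheduler_init init;
    std::deque<Item> initial;
    Item first = { 0 };
    initial.push_back(first);
    tbb::parallel_do(initial.begin(), initial.end(), ProcessItem());
    return 0;
}
[/cpp]

Note that parallel_do still returns once every item (seeded or fed) has been processed, so it covers "unknown number of items" but not "wait indefinitely for items that may never come" - that is where the tbb_thread remark above applies.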

Alexey-Kukanov
Employee
Quoting - Raf Schietekat

Please consult the documentation (Tutorial and Reference Manual), and look for parallel_while/parallel_do.


parallel_do is for finite loops, since it takes begin and end iterators - so it does not fit this case. parallel_while might suit better (though we declared it deprecated). pipeline is another alternative.
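
As a rough sketch of the pipeline alternative (InputFilter and ProcessFilter are invented names, and the filter-mode constants vary slightly between TBB versions): a serial input filter hands items downstream, a parallel filter does the work, and returning NULL from the input filter is what ends the run.

[cpp]#include <cstdio>
#include "tbb/pipeline.h"
#include "tbb/task_scheduler_init.h"

class InputFilter : public tbb::filter {
    int next_;
public:
    InputFilter() : tbb::filter(tbb::filter::serial), next_(0) {}
    void* operator()(void*) {
        if (next_ >= 10) return NULL;   // no more items: the pipeline run ends
        return new int(next_++);        // hand the next item downstream
    }
};

class ProcessFilter : public tbb::filter {
public:
    ProcessFilter() : tbb::filter(tbb::filter::parallel) {}
    void* operator()(void* p) {
        int* item = static_cast<int*>(p);
        std::printf("processed %d\n", *item);  // "do something"
        delete item;
        return NULL;
    }
};

int main() {
    tbb::task_scheduler_init init;
    InputFilter in;
    ProcessFilter work;
    tbb::pipeline p;
    p.add_filter(in);
    p.add_filter(work);
    p.run(8);   // max items in flight
    p.clear();
    return 0;
}
[/cpp]

Like parallel_do, though, the run ends as soon as the input filter reports no more items, so it too assumes the stream eventually dries up.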
bez
Beginner
Thank you for the quick reply :) I was using an older version of the TBB tutorial (with parallel_while), so I didn't see parallel_do, but I still think it doesn't fit my problem. Pipeline isn't good for me either; I'm going to create my own task, maybe it will work the way I want ^^
RafSchietekat
Valued Contributor III
Can you confirm that this while loop controls all that the process is doing and is not supposed to run concurrently with other work? And that "else do nothing" really means "else wait"? If so, I would very much like to know what makes our suggestions unsuitable?
bez
Beginner
Quoting - Raf Schietekat
Can you confirm that this while loop controls all that the process is doing and is not supposed to run concurrently with other work? And that "else do nothing" really means "else wait"? If so, I would very much like to know what makes our suggestions unsuitable?

This loop in my case needs to work concurrently with other tasks, so I just put this loop inside the execute() method of a class that inherits from tbb::task. It's working correctly now :)
About your suggestions: as far as I know parallel_do could work, but I need this loop to run all the time, even when there are no more items in the container, and parallel_do ends its execution after processing all elements in the container.
In parallel_while you need to specify the next item: "method, pop_if_present, [...] sets to the next iteration value if there is one and returns true." And I don't know whether there is another item, so I can't set the next iteration value.
Pipeline was a good suggestion, but I think it is easier to create a task.

And I've got a question about task_scheduler_init::automatic. I read that I should omit the parameter in task_scheduler_init because "It is best to leave the decision of how many threads to use to the task scheduler.", but when I do that my program runs sequentially... am I doing something wrong? ^^
RafSchietekat
Valued Contributor III
"This loop in my case needed to work concurrent with other tasks. So i just inserted whis loop inside execute() method in class that inherites from tbb::task. It's now working correctly :)"
If you mean concurrent like a daemon and/or with blocking, forget about it (use tbb_thread instead), otherwise a task_scheduler_init in the thread should also do the trick.

"And about you suggestions as far as i know parallel_do could work but i needed this loop to work all the time, even when there are no more items in container, and parallel_do ends its execution after processing all elements in container.
In parallel_while you need to sepecify next item "method, pop_if_present, [...] sets to the next iteration value if there is one and returns true." And i don't know if there is another item and can't set to next iteration value.
Pipeline was good suggestion but i think that easier is to create task."
That depends on exactly what you're doing, which you haven't told us.

"And i've got question about task_scheduler_init::automatic. I read that i should omit parameter in task_scheduler_init coz "It is best to leave the decision of how many threads to use to the task scheduler.", but when i do that my program runs sequentially... am i doing something wrong? ^^"
Probably. Do you (also) have a long-lived task_scheduler_init (you should), e.g., in main()?
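
For reference, a minimal sketch of what that looks like (the parallel_for body is just filler): one task_scheduler_init constructed with the default, automatic thread count, living for the whole of main().

[cpp]#include "tbb/task_scheduler_init.h"
#include "tbb/parallel_for.h"
#include "tbb/blocked_range.h"

struct Work {
    void operator()(const tbb::blocked_range<int>& r) const {
        for (int i = r.begin(); i != r.end(); ++i) {
            // do something with i
        }
    }
};

int main() {
    tbb::task_scheduler_init init;   // automatic thread count, lives until main() returns
    tbb::parallel_for(tbb::blocked_range<int>(0, 1000), Work());
    return 0;
}
[/cpp]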
bez
Beginner
Quoting - Raf Schietekat
Probably. Do you (also) have a long-lived task_scheduler_init (you should), e.g., in main()?
Yes, I create a tbb::task_scheduler_init init in main,
then I allocate tasks - one empty task and then a few children of the empty task, spawn them, and wait for all.
But I've got a few tasks that work like daemon threads, and maybe because of this the task scheduler behaves strangely??


RafSchietekat
Valued Contributor III
It seems like you might be requiring concurrency, but TBB thinks that's too frivolous (it's only interested in hard-core performance, for which it will certainly exploit optional concurrency to its advantage), so it won't play along with you (it's very unfair that way, and it snubs preemptive scheduling). Basically your program should be designed to also run on a single thread (you can give task_scheduler_init an argument 1 to simulate that). Otherwise, as I've written earlier, for the parts that require concurrency, use tbb_thread instead.
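
A bare-bones sketch of that tbb_thread route (Daemon and finished are placeholder names): the long-running, possibly blocking loop gets its own thread outside the task scheduler, and the rest of the program can still use TBB algorithms.

[cpp]#include "tbb/tbb_thread.h"
#include "tbb/tick_count.h"

volatile bool finished = false;   // set by whoever decides the work is done

struct Daemon {
    void operator()() {
        while (!finished) {
            // wait for / process items here; blocking is acceptable on a plain thread
            tbb::this_tbb_thread::sleep(tbb::tick_count::interval_t(0.01));
        }
    }
};

int main() {
    Daemon d;
    tbb::tbb_thread worker(d);
    // ... run the rest of the program, including TBB algorithms ...
    finished = true;
    worker.join();
    return 0;
}
[/cpp]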
bez
Beginner
Quoting - Raf Schietekat
It seems like you might be requiring concurrency, but TBB thinks that's too frivolous (it's only interested in hard-core performance, for which it will certainly exploit optional concurrency to its advantage), so it won't play along with you (it's very unfair that way, and it snubs preemptive scheduling). Basically your program should be designed to also run on a single thread (you can give task_scheduler_init an argument 1 to simulate that). Otherwise, as I've written earlier, for the parts that require concurrency, use tbb_thread instead.
OK, thank you for the help :)
I'm now rewriting my program to use tbb_thread. I'll see how it works.
jimdempseyatthecove
Honored Contributor III


bez,

It would seem like you are thinking in terms of threads instead of tasks. What do you wish to happen when (!item_exists) && (!finished)? The "//else do nothing" implies burning CPU cycles waiting for another item_exists.

IOW, is finished set external to the control loop by another thread? If so, how do you wish to relieve senseless CPU cycles? SwitchToThread(), Sleep(0), _mm_pause(), WaitForSingleObject()...? The particular technique would depend on the portion of time currently spent waiting for item_exists versus doing something, and the acceptable latency between an item_exists appearing and the do something.
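
For instance, a small sketch combining two of those options (item_exists and finished are placeholders for whatever flags the real code uses): spin briefly with _mm_pause(), then yield once the wait drags on.

[cpp]#include <immintrin.h>        // _mm_pause
#include "tbb/tbb_thread.h"   // tbb::this_tbb_thread::yield

volatile bool finished = false;     // set by another thread
volatile bool item_exists = false;  // set by a producer

void wait_for_item()
{
    int spins = 0;
    while (!item_exists && !finished) {
        if (++spins < 100) {
            _mm_pause();                    // cheap pause, stays on the core
        } else {
            tbb::this_tbb_thread::yield();  // give the core away when waiting longer
            spins = 0;
        }
    }
}
[/cpp]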

Do the item_exists come in randomly, in batches, continuously, by one producer, by many producers? And is the thread enqueuing time significant compared to your do something? Is there a requirement for the item_exists to be processed in the order in which they came into existence? How much fairness is required (or unfairness tolerated)?

There is no single solution that fits best for all situations.

Jim Dempsey

bez
Beginner
Quoting - jimdempseyatthecove

The code that I posted was just a part (the main idea).
Yes, I was thinking in terms of threads, because I've been writing programs based on threads until now.
The problem I posted was my idea of a producer/consumer (many producers and many consumers). Items are coming in one by one, and they don't have to be processed in any order (random order is OK ^^). I wanted to provide some fairness to the threads, so I used queuing_mutex.
I cannot think of another way of doing it than using a while loop. I also couldn't find another way to reduce CPU cycles (in POSIX I could use pthread_cond to make the thread wait), so I'm now doing it like this:


// Shared state (declared elsewhere): finished, empty, full,
// buffer (a tbb::concurrent_queue), MyQMutex (a tbb::queuing_mutex, typedef'd QMutex).
class Consumer
{
public:
    void operator() ()
    {
        double item;
        int id;
        while (finished == false)
        {
            while (finished == false && empty == false)
            {
                QMutex::scoped_lock mylock(MyQMutex); // queuing_mutex, held for the whole iteration
                //id = tbb::tbb_thread::id get_id();  // <- the problem line mentioned below
                if (buffer.size() > 0) // buffer - concurrent_queue
                {
                    item = remove_item();
                    process_item(item, id);
                    full = false;   // tell producers there is room again
                }
                else
                {
                    empty = true;   // presumably reset by a producer when it adds an item
                }
            }
        }
    }
};

and I just run this like a thread. The Producer class is similar. It works quite well ^^.
I have a problem getting the id of the thread, though (the commented line).
And when I want to wait for the other thread ( tbb::tbb_thread join(my_thread); ) the application finishes immediately.
I really wanted to use the concurrent queue from TBB to see how much faster it would be, but I use that mutex (because I need it to protect "if(buffer.size() > 0)" ), and as a result I'm locking all operations, including popping items from the queue, so it's pointless (I could just use a normal STL container).

What do you think about it?
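
On the mutex point, a sketch of how the emptiness check and the pop can be collapsed into one call (LockFreeConsumer is a made-up name; the method is pop_if_present in the TBB versions of that time and try_pop in later releases), which makes the queuing_mutex around the size() check unnecessary:

[cpp]#include "tbb/concurrent_queue.h"
#include "tbb/tbb_thread.h"

tbb::concurrent_queue<double> buffer;   // shared with the producers
volatile bool finished = false;         // set when production is done

struct LockFreeConsumer {
    void operator()() {
        double item;
        while (!finished) {
            if (buffer.pop_if_present(item)) {   // check and pop in one atomic call
                // process item here
            } else {
                tbb::this_tbb_thread::yield();   // nothing to do right now
            }
        }
    }
};
[/cpp]

(And for waiting on the other thread, the call would be my_thread.join() - join is a member function of tbb_thread, not something you pass to a new tbb_thread.)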
jimdempseyatthecove
Honored Contributor III
bez,

Concurrent queue might work out for you, but you are still thinking in terms of threads not tasks.

Using an alternate threading tool (QuickThread - Intel Webinar scheduled June 4th), task queues are relatively easy to do:

[cpp]void ConsumerTask(Object_t* Object)
{
    Object->DoSomething();
    DoSomethingElse(*Object); // by reference
    AnotherThing(Object);     // by pointer
    ...
    // optional
    delete Object;
}

...
// production point in your code
// optional (or use persistent object)
Object_t* Object = new Object_t;

// build object
// ...
// enqueue Object into consumer task
parallel_task(ConsumerTask, Object);
... // continue producing objects or doing other work
[/cpp]
When the task has I/O statements you can use

parallel_task(IO$, ConsumerTask, Object);

There are other nifty things you can do with it

// to NUMA node with most available threads
parallel_task(Waiting_M0$, ConsumerTask, Object);


Jim Dempsey



bez
Beginner
Quoting - jimdempseyatthecove

OK, thank you. I am going to take a closer look at this ^^