I'm new to TBB, so this question might be trivial. Is there a way to parallelize a while loop that runs all the time?
E.g.:
while(!finished)
{
    if(item_exists)
    {
        //do something
    }
    //else do nothing
}
And we don't know how many items we will process, what the next item is, or even whether there is another item. In pthreads or OpenMP it's very easy to parallelize, but here it seems a bit strange to me... Or maybe I don't understand it well enough.
Please help me :)
Please consult the documentation (Tutorial and Reference Manual), and look for parallel_while/parallel_do.
(Added) If you want to block waiting for new items, though, use tbb_thread instead if you have other work at the same time, because TBB will not differentiate between doing useful work and being blocked.
parallel_do is for finite loops, since it takes begin and end iterators, so it does not fit this case. parallel_while might suit better (though we have declared it deprecated). pipeline is another alternative.
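For reference, a minimal sketch of how parallel_do can still grow its workload at run time through tbb::parallel_do_feeder, assuming the classic parallel_do interface of that era; the item type and the feeding rule are made up for illustration. It still returns once the initial range and all fed items are exhausted, which is the limitation noted above.
[cpp]
#include "tbb/parallel_do.h"
#include "tbb/task_scheduler_init.h"
#include <vector>

// Body processes one item and may append more work through the feeder.
struct Body {
    void operator()(int item, tbb::parallel_do_feeder<int>& feeder) const {
        // ... process item ...
        if (item > 0)
            feeder.add(item - 1);   // dynamically added work item (made-up rule)
    }
};

int main() {
    tbb::task_scheduler_init init;   // default (automatic) number of threads
    std::vector<int> initial;
    initial.push_back(3);
    initial.push_back(5);
    tbb::parallel_do(initial.begin(), initial.end(), Body());
    // Returns once the initial items and everything added via the feeder have
    // been processed; it does not keep waiting for items that may appear later.
    return 0;
}
[/cpp]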
In my case this loop needs to run concurrently with other tasks, so I just put the loop inside the execute() method of a class that inherits from tbb::task. It's now working correctly :)
And about your suggestions: as far as I know parallel_do could work, but I need this loop to run all the time, even when there are no more items in the container, and parallel_do ends its execution after processing all elements in the container.
With parallel_while you need to specify the next item: "method, pop_if_present, [...] sets to the next iteration value if there is one and returns true." And I don't know whether there is another item, so I can't set the next iteration value.
Pipeline was a good suggestion, but I think it's easier to create a task.
And I've got a question about task_scheduler_init::automatic. I read that I should omit the parameter in task_scheduler_init because "It is best to leave the decision of how many threads to use to the task scheduler.", but when I do that my program runs sequentially... am I doing something wrong? ^^
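A minimal sketch of the approach described above (a polling loop inside execute() of a tbb::task subclass), assuming the classic tbb::task interface; the finished flag and the two helpers are hypothetical stand-ins for the poster's real code.
[cpp]
#include "tbb/task.h"

extern volatile bool finished;   // hypothetical shutdown flag
extern bool item_exists();       // hypothetical: is there work right now?
extern void process_item();      // hypothetical: handle one item

class PollingTask : public tbb::task {
    /*override*/ tbb::task* execute() {
        while (!finished) {
            if (item_exists())
                process_item();
            // else: spins, burning CPU until the next item or until finished
        }
        return NULL;   // no task bypass
    }
};
[/cpp]
Such a task would be spawned like any other (see the allocate_root/allocate_child sketch further down), but because execute() never yields, it occupies one TBB worker thread for its whole lifetime, which is why tbb_thread is suggested for daemon-style loops.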
If you mean concurrent like a daemon and/or with blocking, forget about it (use tbb_thread instead), otherwise a task_scheduler_init in the thread should also do the trick.
"And about you suggestions as far as i know parallel_do could work but i needed this loop to work all the time, even when there are no more items in container, and parallel_do ends its execution after processing all elements in container.
In parallel_while you need to sepecify next item "method, pop_if_present, [...] sets to the next iteration value if there is one and returns true." And i don't know if there is another item and can't set to next iteration value.
Pipeline was good suggestion but i think that easier is to create task."
That depends on exactly what you're doing, which you haven't told us.
"And i've got question about task_scheduler_init::automatic. I read that i should omit parameter in task_scheduler_init coz "It is best to leave the decision of how many threads to use to the task scheduler.", but when i do that my program runs sequentially... am i doing something wrong? ^^"
Probably. Do you (also) have a long-lived task_scheduler_init (you should), e.g., in main()?
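For example, a minimal sketch of the long-lived initialization being asked about, assuming the classic task_scheduler_init interface; run_application() is just a placeholder for whatever uses TBB.
[cpp]
#include "tbb/task_scheduler_init.h"

extern void run_application();   // placeholder

int main() {
    // Lives for the whole program, so every TBB algorithm and task spawned
    // later can use the worker threads it creates. The default constructor is
    // equivalent to passing task_scheduler_init::automatic.
    tbb::task_scheduler_init init;

    run_application();
    return 0;
}
[/cpp]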
Then I allocate tasks: one empty task and then a few children of the empty task, spawn them, and wait for all.
But I've got a few tasks that work like daemon threads, and maybe because of this the task scheduler behaves strangely??
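For context, a minimal sketch of the empty-task-plus-children pattern described above, assuming the classic tbb::task interface; ChildTask and the child count are made up. A child that loops like a daemon keeps wait_for_all() from returning and ties up a worker thread the whole time.
[cpp]
#include "tbb/task.h"

class ChildTask : public tbb::task {         // stand-in for the real children
    /*override*/ tbb::task* execute() {
        // ... do one piece of work ...
        return NULL;
    }
};

void run_children(int n) {
    tbb::empty_task& root = *new(tbb::task::allocate_root()) tbb::empty_task;
    root.set_ref_count(n + 1);               // n children + 1 for wait_for_all
    for (int i = 0; i < n; ++i)
        root.spawn(*new(root.allocate_child()) ChildTask);
    root.wait_for_all();                     // returns when all children finish
    root.destroy(root);                      // the empty root is destroyed explicitly
}
[/cpp]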
I'm now rewriting my program to use tbb_thread. I'll see how it works.
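For reference, a minimal sketch of starting and joining a tbb_thread, assuming the classic tbb_thread interface; the Consumer functor here is just a placeholder like the one posted further down.
[cpp]
#include "tbb/tbb_thread.h"

struct Consumer {                    // placeholder functor
    void operator()() {
        // blocking / daemon-style loop lives here, outside the task scheduler
    }
};

int main() {
    Consumer body;
    tbb::tbb_thread worker(body);    // starts the thread running body()
    // ... produce items, do other work ...
    worker.join();                   // join() is a member of the thread object
    return 0;
}
[/cpp]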
bez,
It would seem like you are thinking in terms of threads instead of tasks. What do you wish to happen when (!item_exists) && (!finished)? The "//else do nothing" implies burning CPU cycles while waiting for another item_exists.
IOW, is finished set external to the control loop by another thread? If so, how do you wish to relieve the senseless CPU cycles? SwitchToThread(), Sleep(0), _mm_pause(), WaitForSingleObject()...? The particular technique would depend on the portion of time currently spent waiting for item_exists versus doing something, and on the acceptable latency between the first item_exists appearing and the do something.
Do the item_exists come in randomly, in batches, continuously, from one producer, from many producers? And is the thread enqueuing time significant compared to your do something? Is there a requirement for the item_exists to be processed in the order in which they came into existence? How much fairness is required (or unfairness tolerated)?
There is no single solution that fits best for all situations.
Jim Dempsey
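As one illustration of relieving the spin (not necessarily the best choice; that depends on the answers to the questions above), here is a minimal sketch using the _mm_pause() intrinsic; the flag and helpers are hypothetical stand-ins.
[cpp]
#include <xmmintrin.h>               // _mm_pause

extern volatile bool finished;       // hypothetical shutdown flag
extern bool item_exists();           // hypothetical: is there work right now?
extern void do_something();          // hypothetical: handle one item

void consumer_loop() {
    while (!finished) {
        if (item_exists()) {
            do_something();
        } else {
            // Hint to the core that this is a busy-wait; heavier options such
            // as Sleep(0)/SwitchToThread() or a real wait primitive trade more
            // latency for less wasted CPU.
            _mm_pause();
        }
    }
}
[/cpp]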
Yes, I was thinking in terms of threads because I've been writing thread-based programs until now.
The problem I posted was my idea of producer/consumer (many producers and many consumers). Items come in one by one, and they don't have to be processed in any order (random order is OK ^^). I wanted to provide some fairness to the threads, so I used queuing_mutex.
I can't think of another way of doing it than using a while loop. I also couldn't find another way to reduce CPU cycles (in POSIX I could use pthread_cond to make a thread wait), so I'm now doing it like this:
class Consumer
{
public:
    void operator() ()
    {
        double item;
        int id;
        while(finished == false)
        {
            while(finished == false && empty == false)
            {
                QMutex::scoped_lock mylock(MyQMutex); //queuing_mutex
                //id = tbb::tbb_thread::id get_id();
                if(buffer.size() > 0) //buffer - concurrent_queue
                {
                    item = remove_item();
                    process_item(item, id);
                    full = false;
                }else{
                    empty = true;
                }
            }
        }
    }
};
and I just run this like a thread. The Producer class is similar. It works quite well ^^.
I have a problem getting the id of the thread, though (the commented line).
And when I want to wait for the other thread ( tbb::tbb_thread join(my_thread); ), the application finishes working immediately.
I really wanted to use the concurrent queue from TBB to see how much faster it would go, but I use that mutex (because I need it to protect "if(buffer.size() > 0)"), and as a result I'm locking all operations, including popping items from the queue, so it's pointless (I could just use a normal STL container).
What do you think about it?
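A minimal sketch of how the mutex around the size check above could be avoided, assuming the concurrent_queue interface of that era (pop_if_present; newer versions call it try_pop); the flag and the processing call are placeholders for the poster's real code.
[cpp]
#include "tbb/concurrent_queue.h"

extern volatile bool finished;               // hypothetical shutdown flag
extern void process_item(double item);       // placeholder for the real processing

tbb::concurrent_queue<double> buffer;

void consume() {
    double item;
    while (!finished) {
        // pop_if_present atomically checks for an element and removes it,
        // so no extra mutex around size() is needed.
        if (buffer.pop_if_present(item)) {
            process_item(item);
        } else {
            // queue is empty right now: yield, pause, or re-check finished
        }
    }
}
[/cpp]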
Concurrent queue might work out for you, but you are still thinking in terms of threads not tasks.
Using an alternate threading tool (QuickThread, Intel Webinar scheduled June 4th), task queues are relatively easy to do:
[cpp]void ConsumerTask(Object_t* Object)
{
    Object->DoSomething();
    DoSomethingElse(*Object); // by reference
    AnotherThing(Object); // by pointer
    ...
    // optional
    delete Object;
}
...
// production point in your code
// optional (or use persistent object)
Object_t* Object = new Object_t;
// build object
// ...
// enqueue Object into consumer task
parallel_task(ConsumerTask, Object);
... // continue producing objects or doing other work
[/cpp]
When the task is (has) I/O statements you can use:
parallel_task(IO$, ConsumerTask, Object);
There are other nifty things you can do with it:
// to NUMA node with most available threads
parallel_task(Waiting_M0$, ConsumerTask, Object);
Jim Dempsey
