Unfortunately for performance, process_item() busy-waits.
Yes, as Raf said, process_item() busy-waits, and we actually need to fix it (I am sorry, the problem simply fell off our radar, so to speak).
Meanwhile, the recommendation is to use try_process_item() (which has an additional return value to indicate the absence of work) together with some external blocking mechanism, such as a Windows event, a condition_variable, or a semaphore.
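For illustration, the servicing loop might look like the sketch below. The signalling plumbing is my assumption for the example, not part of TBB: a TbfSignal structure where the producer side increments 'available' and notifies once per item it emits, and sets 'done' (with a final notify) when the input is exhausted. Any of the mechanisms listed above would do in its place.
[plain]
#include <mutex>
#include <condition_variable>
#include "tbb/pipeline.h"

// Illustrative shared state (not TBB API): the producer side increments
// 'available' and notifies for each item it emits, and sets 'done' plus a
// final notify when the input is exhausted.
struct TbfSignal {
    std::mutex mtx;
    std::condition_variable cv;
    int available;
    bool done;
    TbfSignal() : available(0), done(false) {}
};

void service_tbf(tbb::thread_bound_filter& tbf, TbfSignal& sig) {
    for (;;) {
        tbb::thread_bound_filter::result_type r = tbf.try_process_item();
        if (r == tbb::thread_bound_filter::end_of_stream)
            break;                                   // the pipeline has finished
        if (r == tbb::thread_bound_filter::success) {
            std::unique_lock<std::mutex> lock(sig.mtx);
            --sig.available;                         // consumed one announced item
            continue;
        }
        // item_not_available: block on the condition variable instead of spinning.
        std::unique_lock<std::mutex> lock(sig.mtx);
        while (sig.available == 0 && !sig.done)
            sig.cv.wait(lock);
        // Once 'done' is set we simply keep polling until end_of_stream;
        // only this final hand-off can spin briefly.
    }
}
[/plain]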
The recommendation is to use it as std::condition_variable.
The tbb::interface5 namespace is an implementation detail that can change over time (e.g. if the implementation is reworked in an incompatible way, it will move into tbb::interfaceX), while the interface represented by std::condition_variable is expected to remain backward-compatible.
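In other words, code against the std name rather than the versioned namespace. A trivial illustration follows; with a C++11 compiler the header is simply <condition_variable>, and older TBB releases exposed the same interface through a compatibility header whose exact path you should check in your distribution:
[plain]
#include <condition_variable>  // C++11; older TBB shipped an equivalent compat header

std::condition_variable cv;    // portable: the documented, std-compatible name

// tbb::interface5::condition_variable cv2;  // avoid: versioned implementation
//                                           // namespace, may move to interfaceX later
[/plain]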
If there is other work to do while the pipeline is running, the call to the method pipeline::run can be replaced by a pair of calls, pipeline::start_run and pipeline::finish_run, and the calling thread can do other work between the calls. Section 3.9.7 has an example.
Yes, it's inaccurate. I apologize for that. In prototypes of the thread-bound filter, we tried to provide the ability to bind it to the same thread that starts the pipeline; this note in the Reference is an artifact of that effort. The methods were never provided as a supported feature.
If you do not want to start another thread for the TBF but instead use the current one, I have a suggestion. The common idea is to fire a special task to start the pipeline, which will be taken and executed by a TBB worker thread. This way, the worker becomes blocked by pipeline::run(), while your thread can serve the TBF or do something else. I can suggest two ways of implementing that (a sketch of the first one follows after the list):
- Use class task_group. It is pretty convenient: fire a task for asynchronous execution, then do some other work, then wait for completion of the task. Its disadvantage is that if no worker thread is available (e.g. the program runs on a single core), the task may not get executed, so the pipeline won't start at all.
- Use the new method task::enqueue(), available in the most recent stable releases. It is easy enough to start a task, though not as convenient as with task_group, and much less convenient to wait for task completion (there is no direct support for waiting on enqueued tasks; if you need it, you have to do it manually, either with an event/semaphore/condition variable, or with an otherwise unnecessary parent task). Its advantage is that it guarantees execution by a worker thread, i.e. it does not have the problem of the first approach.
The common shortcoming of both approaches, however, is that the worker thread that takes the task and starts the pipeline will busy-wait whenever the pipeline is empty but not finished.
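A minimal sketch of the first (task_group) variant, assuming a compiler with lambda support; the function name, the filters passed in, and the token count are illustrative, not prescribed by TBB:
[plain]
#include "tbb/pipeline.h"
#include "tbb/task_group.h"

void run_pipeline_serving_tbf(tbb::filter& input, tbb::thread_bound_filter& output) {
    tbb::pipeline p;
    p.add_filter(input);
    p.add_filter(output);

    tbb::task_group tg;
    // Fire the pipeline as a task: a TBB worker picks it up and blocks inside
    // pipeline::run(), leaving the current thread free to serve the TBF.
    // Caveat from above: on a single core no worker may ever take the task,
    // so the pipeline would not start.
    tg.run([&p] { p.run(/*max_number_of_live_tokens=*/8); });

    // The current thread services the thread-bound filter until the stream ends.
    // (As noted earlier, process_item() busy-waits while no item is ready.)
    while (output.process_item() != tbb::thread_bound_filter::end_of_stream)
        continue;

    tg.wait();  // make sure pipeline::run() has returned
}
[/plain]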
I will think some more about what can be done to solve your case, and get back with ideas later today or tomorrow.
I think it makes sense to step back and start from your use case, which seems to be well described in the above quote.
You want to use a bounded (and therefore blocking) queue as the interface between the pipeline-based producer and the thread-based legacy consumer. It is expected that the producer fills the queue faster than the consumer drains it; in this case, the pipeline should block and wait. On the other hand, there might be several pipelines working at the same time, so blocking a TBB worker thread is undesirable. Ideally, the master thread that started the pipeline should be the one that blocks.
Unfortunately, the current TBB pipeline implementation has problems with idle spinning here and there, in particular with thread-bound filters. Basically, the only way to avoid idle spinning in situations when the pipeline can neither proceed nor finish is to make the master thread do a blocking call, and let the worker threads go to sleep due to the absence of available tasks.
Now, what I can suggest is to recognize which thread executes the last pipeline stage (the one that should push an item into the queue), and do different things depending on that. In a sense, it's like converting a non-thread-bound filter to act like a thread-bound one. To find out which thread runs the filter, you might use tbb_thread::id (see sections 12.2 and 12.3.1 in the TBB 2.2 Reference Manual) - I hate to say this, as we always argue for "thread-agnostic" parallel programming, but after all it's TBB issues that require such a workaround. If the master thread executes the filter, it uses the blocking push() method of the queue, so that it blocks when the queue is full. If a worker thread executes the filter, it should use the non-blocking try_push() method. The question is what to do when try_push fails. So far, there is no way to tell the pipeline that the filter failed to process the current token and that it should be re-attempted. So the solution I see is to use an intermediate queue inside the filter for such items. As the last filter should be serial, std::queue would work there, without any additional locking.
In pseudocode, the last filter I described looks like this:
[plain]
if the filter is executed by the master  // can block
    // push pending items first
    while intermediate queue is not empty
        pop an item from the top of intermediate queue
        push this item into the bounded output queue
    push the item received as the argument into the bounded output queue
else  // the filter is executed by a worker; should not block
    // push pending items first
    while intermediate queue is not empty
        read an item from the top of intermediate queue (but do not pop it)
        try_push this item into the bounded output queue
        if try_push succeeded
            pop from the top of intermediate queue
        else
            break the while loop
    // process the item received as the argument
    if intermediate queue is empty  // i.e. no more pending items
        try_push the received item into the bounded output queue
        if try_push succeeded
            return
    push the item into intermediate queue
    return
[/plain]
The non-blocking section might be simplified if the received item is first unconditionally pushed into the intermediate queue; then you can just process the queue in the loop. It's a little bit suboptimal execution-wise, but the code will be simpler.
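For concreteness, here is roughly how that last filter could look in C++, with tbb::concurrent_bounded_queue as the output queue and tbb_thread::id for the master check. Treat it as a sketch under the assumptions above: Item, OutputFilter and drain_pending are placeholder names, not TBB API.
[plain]
#include <queue>
#include "tbb/pipeline.h"
#include "tbb/tbb_thread.h"
#include "tbb/concurrent_queue.h"

struct Item;  // whatever your pipeline produces (placeholder)

// Last (serial) filter: blocking push on the master thread, try_push plus an
// intermediate std::queue on worker threads, as in the pseudocode above.
class OutputFilter : public tbb::filter {
    tbb::concurrent_bounded_queue<Item*>& out_;  // bounded queue shared with the legacy consumer
    std::queue<Item*> pending_;                  // intermediate queue; the filter is serial, so no lock
    tbb::tbb_thread::id master_id_;              // id of the thread that will call pipeline::run()
public:
    OutputFilter(tbb::concurrent_bounded_queue<Item*>& out)
        : tbb::filter(tbb::filter::serial_in_order),
          out_(out),
          master_id_(tbb::this_tbb_thread::get_id())  // so construct the filter on the master thread
    {}

    void drain_pending();  // flush leftovers once the pipeline has finished (see below)

    /*override*/ void* operator()(void* p) {
        Item* item = static_cast<Item*>(p);
        if (tbb::this_tbb_thread::get_id() == master_id_) {
            // Master thread: blocking pushes are allowed.
            while (!pending_.empty()) {
                out_.push(pending_.front());
                pending_.pop();
            }
            out_.push(item);
        } else {
            // Worker thread: must not block.
            while (!pending_.empty() && out_.try_push(pending_.front()))
                pending_.pop();
            if (!pending_.empty() || !out_.try_push(item))
                pending_.push(item);  // could not deliver yet; keep it for later
        }
        return NULL;
    }
};
[/plain]
Note that the filter must be constructed on the same thread that will later call pipeline::run(), so that master_id_ captures the right thread.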
The task scheduler by default allocates one less software thread than there are hardware threads available, to account for the user thread that starts the work.
Yes, there should be special handling for the last items that can accumulate in the intermediate queue; it can probably be done right after returning from pipeline::run(), or alternatively in the destructor of the filter.
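To make that concrete with the OutputFilter sketch from the earlier post (again, illustrative names, not TBB API), the post-run flush could be as simple as:
[plain]
// Call on the master thread right after pipeline::run() returns (or from the
// filter's destructor): only items buffered by worker-side invocations remain,
// and blocking pushes are safe at this point.
void OutputFilter::drain_pending() {
    while (!pending_.empty()) {
        out_.push(pending_.front());
        pending_.pop();
    }
}
[/plain]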