Using tbb22_004oss on Red Hat 64-bit, with dual Xeon X5550 processors.
We have an application that creates two pipelines to run concurrently. This is achieved by spawning a tbb_thread for each pipeline; the thread creates the pipeline, calls pipeline.run(), and exits when the run completes.
Pipeline 1 has 7 filters, ordered as:
serial_in_order
parallel
parallel
parallel
serial_in_order
parallel
serial_in_order
Pipeline 2 has 6 filters with a similar arrangement of modes.
The in-flight token counts are set to 12 and 6 respectively (pipeline 1 takes substantially longer to process than pipeline 2).
In general, the application runs spectacularly well. It typically processes between 150 and 1,500 'blobs' of data and exits cleanly, each time, every time.
However, during our release testing, I put 15,000 blobs through it and found that, although pipeline 2 exits first as normal, pipeline 1 now never exits. This happens each time, every time, with any data count over about 9,000 (the threshold might be a bit lower, but 3,000 works fine).
The way we exit a pipeline is for the first filter to return NULL when it runs out of data. I've scattered printf() calls throughout the application and can confirm the first filter is correctly returning NULL, but the pipeline->run() call never returns.
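For reference, here is a minimal sketch of the setup described above, using the old TBB 2.2 pipeline interface. The filter classes, the payload (a plain int), and the blob count are illustrative stand-ins, not our actual code; the key point is the serial_in_order input filter returning NULL to end the stream, and run() being called with the in-flight token count on a dedicated tbb_thread.

```cpp
#include "tbb/pipeline.h"
#include "tbb/task_scheduler_init.h"
#include "tbb/tbb_thread.h"
#include <cstddef>

// Input stage: serial_in_order; returns NULL when the data runs out,
// which is what tells pipeline::run() to wind down and return.
class InputFilter : public tbb::filter {
    int remaining;
public:
    explicit InputFilter(int n)
        : tbb::filter(tbb::filter::serial_in_order), remaining(n) {}
    void* operator()(void*) {
        if (remaining == 0) return NULL;    // end-of-stream signal
        return new int(remaining--);        // hypothetical 'blob'
    }
};

// A parallel middle stage (stand-in for the real processing filters).
class WorkFilter : public tbb::filter {
public:
    WorkFilter() : tbb::filter(tbb::filter::parallel) {}
    void* operator()(void* item) {
        *static_cast<int*>(item) *= 2;      // placeholder work
        return item;
    }
};

// Final serial_in_order stage that consumes each item.
class OutputFilter : public tbb::filter {
public:
    OutputFilter() : tbb::filter(tbb::filter::serial_in_order) {}
    void* operator()(void* item) {
        delete static_cast<int*>(item);
        return NULL;
    }
};

void run_pipeline1(int nblobs) {
    InputFilter in(nblobs);
    WorkFilter work;
    OutputFilter out;
    tbb::pipeline p;
    p.add_filter(in);
    p.add_filter(work);
    p.add_filter(out);
    p.run(12);   // 12 tokens in flight, as above
    p.clear();
}

int main() {
    tbb::task_scheduler_init init;
    // One thread per pipeline; only pipeline 1 is sketched here.
    tbb::tbb_thread t1(run_pipeline1, 15000);
    t1.join();
    return 0;
}
```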
Any ideas?
cheers,
Mat
Anton,
Sincere apologies. After a good debugging session, I managed to locate the problem: the last pipeline stage was using a concurrent_bounded_queue to output data. Although the queue's capacity was set quite high, the queue still filled up, and the blocking push stalled the pipeline.
With that corrected, everything works; the pipeline implementation is not at fault!
Thanks,
Mat
