Intel® oneAPI Threading Building Blocks
Ask questions and share information about adding parallelism to your applications when using this threading library.

pipeline not shutting down

memaher
Beginner

Using tbb22_004oss on 64-bit Red Hat with dual Xeon X5550 processors.

We have an application which creates two pipelines that run concurrently. This is achieved by spawning a tbb_thread for each pipeline, which then creates the pipeline, calls pipeline::run(), and exits when the run completes.

Pipeline 1 has 7 filters, ordered as:
serial_in_order
parallel
parallel
parallel
serial_in_order
parallel
serial_in_order

Pipeline 2 has 6 filters with a similar arrangement of serial and parallel stages.

The in-flight token counts are set to 12 and 6 respectively (pipeline 1 takes substantially longer to process than pipeline 2).

In general, the application runs spectacularly well. It typically processes between 150 and 1,500 'blobs' of data and exits cleanly - each time, every time.

However, during our release testing, I put 15,000 blobs through it and found that, although pipeline 2 exits first as normal, pipeline 1 now never exits. This happens each time, every time, with any data count over roughly 9,000 (it might be a bit less - 3,000 works fine).

The way we're exiting the pipeline is for the first filter to return NULL when it runs out of data. I've put printf() calls all over the application and can confirm the first filter is correctly returning NULL, but pipeline::run() never returns.
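
For reference, here's a stripped-down sketch of the kind of setup we have (not our actual code - the filter classes, the int "blob" payload, and the counts are all made up for illustration):

#include <cstdio>
#include "tbb/pipeline.h"
#include "tbb/tbb_thread.h"

// Hypothetical input stage: hands out 'blobs' until the source is empty,
// then returns NULL, which tells the pipeline to wind down.
class InputFilter : public tbb::filter {
    int remaining_;
public:
    explicit InputFilter(int n)
        : tbb::filter(tbb::filter::serial_in_order), remaining_(n) {}
    void* operator()(void*) {
        if (remaining_ == 0)
            return NULL;               // end of data: run() should return
        return new int(remaining_--);  // pass a blob downstream
    }
};

// Hypothetical middle stage: may run on several blobs at once.
class WorkFilter : public tbb::filter {
public:
    WorkFilter() : tbb::filter(tbb::filter::parallel) {}
    void* operator()(void* item) {
        int* blob = static_cast<int*>(item);
        *blob *= 2;                    // stand-in for the real processing
        return blob;
    }
};

// Hypothetical final stage: consumes blobs in order.
class OutputFilter : public tbb::filter {
public:
    OutputFilter() : tbb::filter(tbb::filter::serial_in_order) {}
    void* operator()(void* item) {
        delete static_cast<int*>(item);
        return NULL;                   // last stage: nothing to pass on
    }
};

void run_pipeline_1() {
    InputFilter  in(15000);
    WorkFilter   work;
    OutputFilter out;
    tbb::pipeline p;
    p.add_filter(in);
    p.add_filter(work);
    p.add_filter(out);
    p.run(12);                         // at most 12 blobs in flight
    p.clear();
}

int main() {
    // The real application spawns a second tbb_thread for pipeline 2
    // (with run(6)) in exactly the same way.
    tbb::tbb_thread t1(run_pipeline_1);
    t1.join();
    std::printf("pipeline 1 done\n");
    return 0;
}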

Any ideas?

cheers,

Mat

Anton_Pegushin
New Contributor II
Quoting - memaher

[...] although pipeline 2 exits first as normal, pipeline 1 now never exits. [...] I can confirm the first filter is correctly returning NULL, but pipeline::run() never returns.

Hi,

So where does the debugger point you when you attach to the never-ending process? Are you using synchronization (mutexes) inside your filters (in pipeline 1, the one that hangs), and could they be causing a deadlock?
memaher
Beginner
Quoting - Anton_Pegushin

[...] where does the debugger point you when you attach to the never-ending process?

Anton,

Sincere apologies - after a good debugging session, I managed to locate the problem. The last pipeline stage was pushing its output into a concurrent_bounded_queue. Although the queue's capacity was set substantially high, with 15,000 blobs it still managed to reach that limit, at which point push() blocked and stalled the pipeline.
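
For anyone who hits the same thing, here is a stripped-down sketch of the failure mode (hypothetical names and capacity, not our actual code):

#include "tbb/concurrent_queue.h"
#include "tbb/pipeline.h"

// Consumer-facing output queue (hypothetical). push() BLOCKS while the
// queue is at capacity, so if the consumer falls behind for long enough,
// the final serial stage blocks inside the pipeline, tokens stop being
// recycled, and pipeline::run() never returns - exactly the hang above.
tbb::concurrent_bounded_queue<int*> g_output;

class QueueOutputFilter : public tbb::filter {
public:
    QueueOutputFilter() : tbb::filter(tbb::filter::serial_in_order) {
        g_output.set_capacity(8192);   // "substantially high" - but finite
    }
    void* operator()(void* item) {
        g_output.push(static_cast<int*>(item)); // blocks once 8192 items queue up
        return NULL;
    }
};

// Possible fixes: drain g_output faster, raise (or remove) the capacity,
// or use try_push() and handle the queue-full case explicitly.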

With that corrected, everything runs cleanly - the pipeline implementation is not at fault!

Thanks,

Mat