I thought I would use concurrent_bounded_queue to pass data between pipeline filters and to limit the number of items generated. It seems to have been a bad idea because the pipeline deadlocks, but I am not sure why.
I would expect that a pop on the queue passed to the next filter wakes up the thread waiting on the push (I limit the capacity of the queue), but this never seems to happen.
My motivation for using the queue is twofold: first, I pass around objects from a third-party API which may be shared_ptrs; second, it lets me rely on RAII instead of matching news and deletes by hand.
I attach an example implementation consisting of three filters (an input filter, a transform filter and an output filter).
I hope the problem is only with my execution of the idea of combining a bounded queue with a pipeline.
If multiple threads try to submit from the parallel middle stage to a queue that can hold only one element, those threads block, and TBB may have no more threads left to make progress with. You shouldn't second-guess the pipeline's existing mechanism for limiting the number of data items in flight: the token limit you pass when running the pipeline.
Just as an experiment, try making the queue at least as long as the number of threads in the system. But even if that works, do something else instead.
From my experience with pipelines, concurrent_queues do not perform well. Maybe the implementation has changed, but the last time I tried to optimize code using queues of shared resources, I ended up spin-locking far too much.
Also, as Raf said, you are not using pipelines the way you should. Objects are passed along the stages; there is no need to push/pop through queues between them. Just pass a pointer along, and allocate and free in the corresponding stages.