Intel® oneAPI Threading Building Blocks
Ask questions and share information about adding parallelism to your applications when using this threading library.

Equivalent of std::deque at in tbb::concurrent_queue container?

sumanth_reddy_b_
Beginner
724 Views


Is there a way to access concurrent_queue elements without popping them, like the at() function in std::deque? I am accessing the queue elements in different iterations. If I pop the elements from the queue, I then have to refill it, which is an overhead. Please suggest an approach.

0 Kudos
10 Replies
RafSchietekat
Valued Contributor III
724 Views

Have a look at unsafe_begin(), unsafe_end().
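For illustration, a minimal sketch (using int elements just as a placeholder) of walking the queue with those iterators while no other thread is pushing or popping:

#include <tbb/concurrent_queue.h>
#include <iostream>

int main() {
    tbb::concurrent_queue<int> cq;
    for (int i = 0; i < 5; ++i)
        cq.push(i);

    // unsafe_begin()/unsafe_end() traverse the queue without popping,
    // but only while no other thread is modifying it concurrently.
    for (tbb::concurrent_queue<int>::iterator it = cq.unsafe_begin();
         it != cq.unsafe_end(); ++it) {
        std::cout << *it << '\n';
    }
    return 0;
}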

0 Kudos
sumanth_reddy_b_
Beginner
724 Views

I think those are iterators that are not safe to use while other threads access the queue, so not a concurrency-safe method.

0 Kudos
RafSchietekat
Valued Contributor III
724 Views

If you don't synchronise with any other threads that have accessed the queue, I don't see how you could validly peek at its contents, whatever the operation looks like.

0 Kudos
sumanth_reddy_b_
Beginner
724 Views

bool vertext_found = false;   // start false so 'v' is never used before it is set
std::deque<T> dq;             // shared container
CRITICAL_SECTION h;           // assumed initialised elsewhere
size_t i = 0;
T v = NULL;

while ( i < dq.size() ) {     // dq is an object, so dq.size(), not dq->size()
    EnterCriticalSection(&h);
    if ( i < dq.size() ) {
        v = dq.at(i);         // accessing element of queue without popping
        i++;
        vertext_found = true;
    }
    LeaveCriticalSection(&h);
    if (vertext_found && (i < dq.size()) && v != NULL) {
        // **operation on 'v'
        vertext_found = false;
    }
}

This is how I am achieving coarse-grained multithreading with the std::deque at() operation, using locks. How can I achieve fine-grained threading with tbb::concurrent_queue? For that I somehow need to access the queue elements without popping.

0 Kudos
sumanth_reddy_b_
Beginner
724 Views

I used the at() method to access elements of the std::deque. I have synchronized with the other threads by using a critical section.

0 Kudos
RafSchietekat
Valued Contributor III
724 Views

I actually meant the concurrent queue (as well as the standard queue). Whether by location or iterator, such an operation would have to at least be able to assume non-interference by another consumer. Perhaps in such circumstances it might be possible to safely expose some of the current contents of the queue this way, but the use case seems too unusual to (forever) burden the implementation with it (a possible future if not current performance penalty, plus maintainability). Considering that you could just as well transfer the contents into a serial queue once (easily done by a single consumer) and be assured of infinitely repeatable reads from there, that is what I would advise instead.
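A minimal sketch of that transfer, assuming a single consumer does the draining (the helper name drain_to_deque is purely illustrative):

#include <tbb/concurrent_queue.h>
#include <deque>

// Drain the concurrent queue exactly once into a plain std::deque;
// afterwards dq.at(i) / dq[i] give repeatable reads without popping.
template <typename T>
std::deque<T> drain_to_deque(tbb::concurrent_queue<T>& cq) {
    std::deque<T> dq;
    T v;
    while (cq.try_pop(v))     // pops until the queue is observed empty
        dq.push_back(v);
    return dq;
}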

0 Kudos
sumanth_reddy_b_
Beginner
724 Views

I am confused. Sorry, I couldn't understand. Did you check the code above? I got correct output using that logic. So is it not possible to access concurrent_queue elements without popping in a multithreaded environment? You said "Whether by location or iterator, such an operation would have to at least be able to assume non-interference by another consumer." There is no interference between two consumers in the case of the std::deque at() operation, since I am synchronizing that part using locks. Can you please let me know whether it is possible or not, and why?

0 Kudos
RafSchietekat
Valued Contributor III
724 Views

Sorry, somehow I overlooked that piece of code, very strange. I suppose you can get away with calling size() outside of a critical section, but strictly speaking that's already a data race right there.

You can't reliably do the same with a concurrent_queue because the iterators are not guaranteed to remain valid when there's a concurrent push operation. But since you're into relying on unspecified behaviour anyway, you might just try your luck, if you'll also accept the linear access time with iterators that are "relatively slow". Be sure to check again for each new TBB version.

I wouldn't entirely rule out a change to accommodate what you want to do, but it seems unlikely.

Here's a new idea: a concurrent map. Accessing element i will still require logarithmic time, but at least everything is above board and lightweight.
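A sketch of the idea, assuming current oneTBB's tbb::concurrent_map (the key and value types here are only placeholders):

#include <tbb/concurrent_map.h>
#include <cstddef>
#include <string>

int main() {
    // Producers insert under an integer key instead of pushing onto a queue;
    // consumers look elements up by that key without removing them.
    tbb::concurrent_map<std::size_t, std::string> cm;
    cm.emplace(0, "a");
    cm.emplace(1, "b");

    // find() may run concurrently with insertions (but not with erasure).
    auto it = cm.find(1);
    if (it != cm.end()) {
        // operate on it->second
    }
    return 0;
}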

0 Kudos
sumanth_reddy_b_
Beginner
724 Views

tbb::concurrent_queue<T> cq;
T v;

while ( ! cq.empty() ) {      // empty() can be stale under concurrency
    if ( cq.try_pop(v) ) {    // try_pop checks and removes atomically
        // **operation on 'v'
    }
}

Will this work? And one more thing about the std::deque implementation: you said I can get away with calling "while ( i < dq.size() )" outside the critical section. How can I implement that?

0 Kudos
RafSchietekat
Valued Contributor III
724 Views

Just collapse the while and the if into "while (cq.try_pop(v))" and you have a perfectly valid piece of code.
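In full (a tiny sketch; T and the processing step are placeholders as before):

tbb::concurrent_queue<T> cq;
T v;

while (cq.try_pop(v)) {       // succeeds only when an element was actually popped
    // **operation on 'v'
}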

There's still a difference between getting away with something and having a valid program. If you want to call std::deque::size() outside of a critical section, your computer probably won't blow up, and much of the time the program may do what you intend, but you'll also have to take responsibility for correctly dealing with false positives and negatives, and prepare for disapproval. If you implement the critical section with a spin mutex, you probably won't gain anything substantial enough to warrant doing this, even though you might if you only had a kernel-level mutex available. Translation: don't do this.

(Clarification 2013-03-12) The code shown in the previous posting is already valid, but collapsing it is an obvious improvement. If you use std::deque::size() outside a critical section, it is only probable that this will not result in a major malfunction, and you may be satisfied with such an arrangement. But nothing is assured if for whatever reason the implementation does anything adventurous, perhaps in a version that you have not yet tested: that is what undefined behaviour implies. If you are still unhappy with the performance of just a spin mutex, better have a look at aggregators before you resort to anything with supposedly benign races. And of course there's still my suggestion above to use a concurrent map.
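For reference, a sketch of guarding the earlier deque access with a TBB spin mutex instead of a Windows CRITICAL_SECTION (names and types are placeholders taken from the code above):

#include <tbb/spin_mutex.h>
#include <cstddef>
#include <deque>

std::deque<int> dq;           // int stands in for T
tbb::spin_mutex dq_mutex;     // protects every access to dq, size() included

void read_next(std::size_t& i, int& v, bool& found) {
    tbb::spin_mutex::scoped_lock lock(dq_mutex);  // released when it goes out of scope
    if (i < dq.size()) {
        v = dq.at(i);         // read without popping, entirely under the lock
        ++i;
        found = true;
    }
}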

0 Kudos