
Need Suggestion

Hi,

I am implementing enqueue and dequeue operations concurrently on a linked list. What I need is: if the queue is empty, the dequeue thread has to wait for the enqueue thread to enqueue an element.

I am attaching my code here.

My lock looks like:

typedef spin_mutex Lock;
Lock qsmtx;
Lock::scoped_lock lock( qsmtx );

and the same lock qsmtx is used in both enqueue and dequeue, so execution becomes effectively serial.

If I put the waiting loop --> while(front==NULL){ }
outside of the lock, then while one processor is checking front, another may be changing it. I am unable to find an alternative; please give your suggestions on how to escape from this error.

I need to do this using mutexes, without using concurrent_queue().
6 Replies
Dmitry_Vyukov
Valued Contributor I

You need to use a condition variable for such a situation.
The general pattern is as follows.

producer:
mutex.lock();
bool was_empty = enqueue_impl(data);
mutex.unlock();
if (was_empty)
condvar.broadcast();

consumer:
mutex.lock();
while (is_empty())
condvar.wait(mutex);
data = dequeue_impl();
mutex.unlock();
return data;

The main point is that the condvar.wait() method unlocks the mutex first and then blocks the thread, so it provides "optimized waiting", sort of.


Quoting - Dmitriy Vyukov
You need to use a condition variable for such a situation.
The general pattern is as follows.

producer:
mutex.lock();
bool was_empty = enqueue_impl(data);
mutex.unlock();
if (was_empty)
condvar.broadcast();

consumer:
mutex.lock();
while (is_empty())
condvar.wait(mutex);
data = dequeue_impl();
mutex.unlock();
return data;

The main point is that the condvar.wait() method unlocks the mutex first and then blocks the thread, so it provides "optimized waiting", sort of.

How can we unlock the spin_mutex and queuing_mutex? How do we make a thread wait? Is there a wait in TBB?
Shankar1
Beginner

How can we unlock the spin_mutex and queuing_mutex? How do we make a thread wait? Is there a wait in TBB?

The scoped_lock that you use over a spin_mutex or queuing_mutex takes care of unlocking the mutex when it (i.e., the scoped_lock) goes out of scope. You could also call .release() or .unlock() (whichever is exposed) on the scoped_lock to unlock it explicitly.

And to make a thread wait, you have to use a condition variable which you can find in the Boost library.

TBB has a Gate class, but Boost's condition variable is an easier solution for your case.

Quoting - Shankar

The scoped_lock that you use over a spin_mutex or queuing_mutex takes care of unlocking the mutex when it (i.e., the scoped_lock) goes out of scope. You could also call .release() or .unlock() (whichever is exposed) on the scoped_lock to unlock it explicitly.

And to make a thread wait, you have to use a condition variable which you can find in the Boost library.

TBB has a Gate class, but Boost's condition variable is an easier solution for your case.
I have to find a solution to this problem in TBB only; it is for our course on TBB.
Shankar1
Beginner

I have to find a solution to this problem in TBB only; it is for our course on TBB.

Then, if you need to do this using TBB and you don't care about spin waiting when the dequeue thread doesn't find any items, you could use the atomic_backoff class in TBB and back off while checking for items to become available. This way you don't need a spin_mutex or queuing_mutex at all.

jimdempseyatthecove
Black Belt


Praveen,

In looking at your header file, which I assume is a cut-down piece of sample code, I notice that your dequeue returns a bool: true if a node was extracted and deleted, false if no node was present. The value of the extracted node is not used or returned. This leads me to conclude that your program uses the node at the head of the list and then, when you are done with the node, deletes the node at the head of the list.

This leads me to assume that your code has only one consumer thread, and that dequeue will never be called when the head record is null.

It is unclear from the code whether you have multiple producers; I will assume that you do. In any event, from your original post it seems you have a single consumer.

A multi-producer single-consumer FIFO linked list can have some optimizations relating to extraction.

You can test the list for empty, and return an empty indication, without locking the list.
When the list is NOT empty (test before lock), you can further test whether head != tail without the lock. When head != tail, you can remove the head without the lock.

The only time you need to lock the list is when you delete the only node in the list.

Note, there is a further optimization whereby when you reach the point of removing the last node you remember the node address but leave this node at the head of the list. This means your test for empty list needs to test for NULL or == empty node.

What this means is that, for an MPSC FIFO linked list, dequeue is almost completely lock-free.

Well, your code is not quite lock-free. You are using "delete", which takes locks unless it is overloaded with a scalable allocator. The usual work-around is to have a pool of free nodes managed by the enqueue/dequeue; the pool reduces the number of new/delete operations. This is essentially what the scalable allocator does. Your private pool can be optimized as a single-producer, multi-consumer (SPMC) linked list (which has its own set of optimization techniques).

I suggest you look at incorporating spmc and mpsc techniques for managing your queue. This will reduce the number of locks at least by half and will be well worth any additional programming effort.

Jim Dempsey