Intel® oneAPI Threading Building Blocks
Ask questions and share information about adding parallelism to your applications when using this threading library.

Class providing thread-safe access to a resource that must be acquired in a non-thread-safe manner

Oren
Beginner
Hi all,

I've attempted to write a class here that provides a thread-safe way to access a resource that must be acquired in a non-thread-safe manner (e.g., through a socket, a file, or some legacy code). This is my first attempt at such a thing, so any feedback and comments are appreciated.

Two things stand out. First, I'm making a temporary copy of "Item" in my GetItem() function, which might be a bad idea if it's large -- should GetItem take an Item& argument instead (sketched after the code below), or will that cause unnecessary dereferencing? Second, how does concurrent_queue memory usage scale if I want to allocate, say, ~128MB of buffer space in it?* Would a concurrent_vector be more efficient? I don't need random access, just the ability to pop one "Item" off the top in a thread-safe manner (and it doesn't matter which thread gets which "Item", as long as no two threads ever get the same one).

* In my application, Update() is quite a bit more efficient if done in large chunks rather than small bits, so putting ~128MB of Items in the concurrent_queue is almost certainly profitable unless that class has huge overhead.

Thanks for reading and commenting!

[cpp]#include "tbb/concurrent_queue.h"
#include "tbb/mutex.h"

class Item { ... };

typedef tbb::mutex mutex_t;
typedef mutex_t::scoped_lock lock_t;

class MyClass
{
	public:
		Item GetItem();

	private:
		tbb::concurrent_queue<Item> q;
		mutex_t mutex;
		void Update(); // NOT THREAD SAFE //
};

Item MyClass::GetItem()
{
	Item i;
	lock_t myLock;
	while (true)
	{
		if ( q.pop_if_present(i) ) {
			return i;
		} 
		else 
		{
			if ( myLock.try_acquire(mutex) )
			{
				Update();          // refill the queue; Update() is not thread-safe, so only one thread runs it
				myLock.release();  // release the mutex so another thread can refill later if needed
			}
			else {
				Sleep(50);         // another thread is refilling; Sleep() is Windows-specific
			}
		}
	}

	return Item(); // Should never get here.
}
[/cpp]
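
For comparison, here is a rough sketch of the by-reference variant I mentioned above -- same members and the same refill logic, just filling a caller-supplied Item instead of returning one by value (hypothetical and untested):

[cpp]// Hypothetical by-reference variant of GetItem(), assuming the same
// MyClass members (q, mutex, Update()) as in the code above.
void MyClass::GetItem(Item &out)
{
	lock_t myLock;
	while (true)
	{
		if ( q.pop_if_present(out) ) {
			return;             // "out" now holds the popped Item; no temporary copy is returned
		}
		if ( myLock.try_acquire(mutex) ) {
			Update();           // refill the queue; not thread-safe, so only one thread at a time
			myLock.release();   // let other threads refill later if the queue runs dry again
		}
		else {
			Sleep(50);          // another thread is refilling; Sleep() is Windows-specific
		}
	}
}
[/cpp]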
1 Reply
Wooyoung_K_Intel
Employee

Looking at the code, I don't quite understand what it is trying to do, so I will refrain from commenting on the code itself.

As for memory usage in the TBB concurrent queue: internally, concurrent_queue is implemented using 'micro_queues'. Currently it uses 8 micro queues, and each micro queue allocates additional pages when needed (and returns them when they are no longer needed). So if you want to store ~128MB of items in the concurrent queue, it will accommodate that without too much overhead.

Note that concurrent_queue makes private copies of input items in the allocated pages. So if an individual Item is large in bytes, copying items in and out would cost you sizable overhead; in that case, store pointers/references instead.
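
For example, a sketch along these lines (hypothetical names, not tested) keeps only pointers in the queue, so each push/pop copies a pointer rather than the whole Item:

[cpp]// Hypothetical sketch: store pointers in the queue instead of large Items.
#include "tbb/concurrent_queue.h"

class Item { /* large object */ };

tbb::concurrent_queue<Item*> q;

void Produce()
{
	Item *p = new Item();      // construct the large object once, outside the queue
	q.push(p);                 // only the pointer is copied into the queue's pages
}

void Consume()
{
	Item *p = NULL;
	if ( q.pop_if_present(p) ) // pops a pointer, again copying only the pointer
	{
		// ... use *p ...
		delete p;              // the consumer takes ownership and frees the Item
	}
}
[/cpp]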