Intel® oneAPI Threading Building Blocks
Ask questions and share information about adding parallelism to your applications when using this threading library.

Concurrent LRU cache

Ritwik_D_1
Beginner
957 Views

Once an item is accessed in the LRU cache, is there a way to make it invalid for access by subsequent requests? The use case is as follows:

1. I want to access an element from the LRU cache.

2. Use the accessed element; while it is in use, it should not be accessible to other threads.

3. Return the used item to the LRU cache, which makes it available for future requests.

Currently I am not sure how to do Step 2. It is not necessary that I use an LRU cache data structure; maybe there is some other data structure that can help me do this. Any help is appreciated.

6 Replies
Alexei_K_Intel
Employee

2. Use the accessed element and while in use this should not be accessible to other threads.

What do you mean by "not be accessible to other threads"? Are other threads blocked until the element is returned? Or is a new instance of some object created for another request?

Regards,
Alex

 

Ritwik_D_1
Beginner

No, they are not blocked; other threads see it as "not found" and need to create a new instance. When the original thread is done working with the element, it will return the item to the LRU cache, and then it will be ready for use by other threads.

I guess what I am looking for is a concurrent_queue-like data structure without any ordering, something like a concurrent multi-hashmap/multiset.

Alexei_K_Intel
Employee

What about an LRU cache of concurrent_queues? I.e., we access the LRU cache and get a queue; if the queue is empty, we create a new instance and later return the instance to the queue. Or is the overhead too big?
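If it helps, here is a rough sketch of that idea (not a drop-in solution: Resource, the int key, and the history size are invented for illustration, and concurrent_lru_cache is a preview feature that requires TBB_PREVIEW_CONCURRENT_LRU_CACHE):

```cpp
#define TBB_PREVIEW_CONCURRENT_LRU_CACHE 1
#include <tbb/concurrent_lru_cache.h>
#include <tbb/concurrent_queue.h>
#include <memory>
#include <string>

struct Resource {                 // hypothetical reusable item
    std::string data;
};

using Pool    = tbb::concurrent_queue<std::shared_ptr<Resource>>;
using PoolPtr = std::shared_ptr<Pool>;

// The cache maps a key to a pool (queue) of idle instances for that key.
PoolPtr make_pool(int /*key*/) { return std::make_shared<Pool>(); }

tbb::concurrent_lru_cache<int, PoolPtr> cache(make_pool, /*history items*/ 64);

std::shared_ptr<Resource> acquire(int key) {
    PoolPtr pool = cache[key].value();        // get (or lazily create) the pool
    std::shared_ptr<Resource> item;
    if (!pool->try_pop(item))                 // pool empty: no idle instance,
        item = std::make_shared<Resource>();  // so create a fresh one
    return item;                              // used exclusively until released
}

void release(int key, std::shared_ptr<Resource> item) {
    cache[key].value()->push(std::move(item)); // back in the pool for reuse
}
```

Note that the handle returned by operator[] only keeps the pool alive and tracked by the LRU policy; exclusivity of an item comes from popping it off the queue, so other threads simply do not see it while it is in use.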

Regards,
Alex

Ritwik_D_1
Beginner

I have something similar: an array of concurrent_queues. Since queues have an inherent notion of ordering (something I don't need), I was trying to avoid them (from the TBB guide: "A queue is inherently a bottle neck, because it must maintain first-in first-out order."). What are the push/pop complexities of concurrent_queue compared to insertion and deletion in an unordered hash_map/LRU cache?

Alexei_K_Intel
Employee

Yes, the concurrent queue potentially has a bottleneck issue. However, it depends on thread contention. If the access rate from multiple threads is high, then the concurrent queue is not a good choice. But if the application accesses multiple queues and the access rate to a particular queue is low, then this approach can have even less overhead than a concurrent hash map. We can estimate a concurrent queue access as an atomic operation, while a concurrent hash map insertion takes several fine-grained locks.

We can compare the concurrent queue with a common mutex that is acquired for each operation and can become a bottleneck. The concurrent hash map can be compared with an array of uncontended mutexes. I.e., while the concurrent queue is more lightweight, it can be affected by contention. However, if you have multiple queues that are accessed randomly/uniformly, contention should not be a big deal.
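For example, the array-of-queues variant you described might look roughly like this (a sketch only; it assumes the keys are small integers in a known range, and Resource and the array size are made up):

```cpp
#include <tbb/concurrent_queue.h>
#include <array>
#include <memory>
#include <string>

struct Resource {                  // hypothetical reusable item
    std::string data;
};

constexpr std::size_t kKeys = 64;  // assumed number of distinct keys
std::array<tbb::concurrent_queue<std::shared_ptr<Resource>>, kKeys> pools;

std::shared_ptr<Resource> acquire(std::size_t key) {
    std::shared_ptr<Resource> item;
    if (!pools[key].try_pop(item))            // no idle instance for this key,
        item = std::make_shared<Resource>();  // so create a fresh one
    return item;                              // used exclusively until released
}

void release(std::size_t key, std::shared_ptr<Resource> item) {
    pools[key].push(std::move(item));         // make it available again
}
```

Each key has its own queue, so an acquire/release is roughly one atomic push/pop on an otherwise uncontended queue, and threads working on different keys do not interfere with each other.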

Regards,
Alex

Ritwik_D_1
Beginner

Thanks for the information. That should help!
