Intel® oneAPI Threading Building Blocks

tbb::concurrent_lru_cache design

Rietschin__Axel
Beginner

I'm using the concurrent_lru_cache and generally like it, but it appears there is no way for the callback function to fail: operator[] allocates a new bucket and calls the function that produces the value if is_new_value_needed() returns true.

So far, so good, but what if the function fails? It cannot throw an exception, as any other thread waiting on that cache slot would then spin forever.

The only option when the function fails is to return an invalid value (by convention) and have the users of the cache check for that value. In my case, I store pointers in the cache, so storing a nullptr when the function fails makes sense.
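For illustration, here is a minimal sketch of that convention, assuming the preview header from classic TBB; Widget and load_widget are placeholder names of mine, not part of the library:

#define TBB_PREVIEW_CONCURRENT_LRU_CACHE 1
#include "tbb/concurrent_lru_cache.h"

struct Widget
{
    explicit Widget(int /*key*/) { /* expensive construction */ }
};

// The value function must not throw, so it reports failure via nullptr.
Widget* load_widget(int key)
{
    try
    {
        return new Widget(key);
    }
    catch (...)
    {
        return nullptr; // the by-convention invalid value
    }
}

tbb::concurrent_lru_cache<int, Widget*> lru(load_widget, 1024);

void use(int key)
{
    auto item = lru[key];

    if (item.value() == nullptr)
    {
        // The producer failed earlier: the slot is polluted, and every
        // caller keeps getting nullptr until the entry ages out.
    }
}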

Still, when this occurs, the cache slot is polluted with that invalid value, and there is no way for the caller to retry: it will get the same value over and over (until further usage of the cache eventually causes that particular slot to be discarded, but that's not the point).

I think one simple way to work around this would be to allow a cache entry to be forcibly discarded, something like .discard(k), which would free up the slot and allow retries.
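To make the intent concrete, here is hypothetical usage of such a method; .discard() does not exist in TBB today, and retry_widget is just a placeholder:

// Note: the handle is released before discarding, so the entry is not pinned.
Widget* retry_widget(int key)
{
    {
        auto item = lru[key];

        if (item.value() != nullptr)
            return item.value();
    }

    lru.discard(key);      // proposed: free the polluted slot...
    auto retry = lru[key]; // ...so the producer runs again on retry
    return retry.value();
}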

An alternate design could allow the callback to throw exceptions that would bubble up to the thread(s) waiting on that cache slot (and leave the slot empty). Yet another design would use a container-like interface, say .push_front(k, val) and bool .lookup(k, &val), and let the caller manage item creation entirely.
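A sketch of that container-like variant, again purely hypothetical:

// Only successful creations ever enter the cache, so no slot is polluted.
Widget* get_or_create(int key)
{
    Widget* w = nullptr;

    if (!lru.lookup(key, &w))        // proposed: returns false on a miss
    {
        w = load_widget(key);

        if (w != nullptr)
            lru.push_front(key, w);  // proposed: publish only on success
    }

    return w;
}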

Additionally, a Boolean .hit() method on the handle_object, returning true on cache hits, would be useful, for example to produce hit-ratio stats that help gauge the cache's efficiency and pick an optimal cache size. This one is easy, and I managed to implement it.

Thoughts? I see the LRU cache has been in preview for a long time; what are the plans for its evolution?

-- Axel

Alexey-Kukanov
Employee

Hi Axel,

Honestly, concurrent_lru_cache was written as a quick, minimally useful prototype for the purpose of getting feedback. But over all these years we have seen very little interest in this container, and feedback is scarce, so we have not spent time improving it.

The ideas you suggest to overcome the issue sound good to me. A method to discard a value would likely be of wider use. Fixing the container to allow callbacks to throw (and, generally, making it exception-safe) definitely makes sense, as does some way to estimate the hit/miss ratio.

But the truth is that, without interest from, and better collaboration with, some users, the container is likely to be deprecated and dropped. The best way to avoid this is to contribute patches for the improvements you need.

Also, as an alternative, you might consider this implementation (TBB-based): https://github.com/tstarling/thread-safe-lru

Rietschin__Axel
Beginner

OK, so let me take the first step: I've attached a modified version of concurrent_lru_cache.h with a new handle_object::hit() method, so we can now write something like:

auto item = lru[key];

if (item.hit())
{
    // We got a cache hit
}
else
{
    // The item was generated
}

That's a start :-)
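As an example of the hit-ratio stats I mentioned, one could wrap lookups like this, reusing the Widget and lru names from my first post; the counters and get_widget are mine, not part of the patch:

#include <atomic>
#include <cstdint>

std::atomic<std::uint64_t> g_lookups{0};
std::atomic<std::uint64_t> g_hits{0};

Widget* get_widget(int key)
{
    auto item = lru[key];

    ++g_lookups;

    if (item.hit())
        ++g_hits;

    return item.value();
}

double hit_ratio()
{
    auto n = g_lookups.load();
    return n != 0 ? double(g_hits.load()) / double(n) : 0.0;
}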

What would be most useful for me at this point is something like bool concurrent_lru_cache::discard(key), returning true if the key was found and discarded, and false otherwise.

For what it's worth, I'm using tbb::concurrent_lru_cache in a "stream over HTTP" component that gets many millions of uses per day as part of the most widely deployed enterprise online sharing and collaboration system, so it has gotten a lot of love, at least from us :-)

Axel