Intel® oneAPI Threading Building Blocks
Ask questions and share information about adding parallelism to your applications when using this threading library.

concurrent_hash_map::erase

Intel_C_8
Beginner
1,755 Views

Does anyone have a good example of using concurrent_hash_map::erase()?

I have been trying to get this working for the last week, and it appears that it will let me erase a key twice (returns true in both threads), even though the documentation indicates otherwise. Also, how are you supposed to know when to free the contents of the hash element?

For instance:

tbb::concurrent_hash_map<Key, Item*, HashCompare> my_map;

some threads:

tbb::concurrent_hash_map<Key, Item*, HashCompare>::accessor acc_w;

if (my_map.insert(acc_w, new_Key)) {
    acc_w->second = new Item();
}

some more threads:

if (my_map.erase(some_Key)) {
    // get here multiple times for the same Key
    // don't know what the pointer is in order to free it - could do a find() and stash the pointer,
    // but since Keys can be reused, the contents could change between the find() and the erase()
}
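For comparison, here is a minimal sketch of the erase-by-accessor pattern discussed later in this thread, which pairs freeing the pointer with the removal. It assumes an int key, a placeholder Item type and IntHashCompare functor, and a TBB version that provides erase(accessor&); none of these names come from the original code.

#include "tbb/concurrent_hash_map.h"
#include <cstddef>

struct Item { int payload; };

struct IntHashCompare {
    static std::size_t hash(int k) { return static_cast<std::size_t>(k) * 2654435761u; }
    static bool equal(int a, int b) { return a == b; }
};

typedef tbb::concurrent_hash_map<int, Item*, IntHashCompare> ItemMap;

// Remove some_Key and free its Item exactly once across all threads.
bool remove_and_free(ItemMap& my_map, int some_Key) {
    ItemMap::accessor acc;            // write accessor: exclusive lock on the element
    if (!my_map.find(acc, some_Key))
        return false;                 // not present, or already erased by another thread
    delete acc->second;               // safe: no other thread can reach the element now
    acc->second = NULL;
    return my_map.erase(acc);         // erases the exact element the accessor points to
}

Because the write accessor is held across both the delete and the erase, only one thread can get between find() and erase() for a given element, so the pointer is freed exactly once.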

I also tried defining the map just as:

tbb::concurrent_hash_map<Key, Item, HashCompare> my_map;

But I couldn't get this working because Item has dynamically allocated data members, and despite adding a copy constructor and an assignment operator I couldn't get it to work.
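In case it helps, here is a minimal sketch of what storing Item by value usually requires when it owns heap memory: a deep copy constructor, an assignment operator, and a destructor that stay consistent with each other. The class and the IntHashCompare functor below are made-up placeholders, not the actual Item.

#include "tbb/concurrent_hash_map.h"
#include <algorithm>
#include <cstddef>

// Placeholder value type that owns a dynamically allocated buffer.
class Item {
    std::size_t n;
    double* buf;
public:
    explicit Item(std::size_t count = 0) : n(count), buf(count ? new double[count]() : NULL) {}
    Item(const Item& other) : n(other.n), buf(other.n ? new double[other.n] : NULL) {
        for (std::size_t i = 0; i < n; ++i) buf[i] = other.buf[i];   // deep copy
    }
    Item& operator=(Item other) {     // copy-and-swap: reuses the copy constructor
        std::swap(n, other.n);
        std::swap(buf, other.buf);
        return *this;                 // 'other' takes the old buffer down with it
    }
    ~Item() { delete[] buf; }
};

struct IntHashCompare {
    static std::size_t hash(int k) { return static_cast<std::size_t>(k) * 2654435761u; }
    static bool equal(int a, int b) { return a == b; }
};

typedef tbb::concurrent_hash_map<int, Item, IntHashCompare> ItemMap;

The container copies and destroys values internally (during insert, for example), so any bug in the copy path tends to surface as crashes or double frees inside the map; storing Item* instead avoids the copies but moves the cleanup problem to erase time.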

thanks for any assistance.

Simon Cope

25 Replies
Alexey-Kukanov
Employee
220 Views

Let me check my understanding. You are saying that a thread, after acquiring the segment and chain locks and finding the item of interest, would not acquire the item lock but instead queue itself into a waitlist for the item and then release those locks. And then what? Before starting to acquire the item lock, the thread will need to check whether the lock is still valid, which in your vision seems to be equivalent to the thread still being in the waitlist. The question is how the waitlist's consistency is preserved - by yet another lock?

Yes, I think you could try. The current code of the hash map should be platform-neutral, I think, except perhaps for some workarounds of platform-specific issues. For atomic operations, tbb::atomic should suffice. If some loads or stores need fences, there are the __TBB_load_with_acquire and __TBB_store_with_release functions for that purpose.
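For example, here is a rough sketch of a claim-then-publish flag built only on tbb::atomic, relying on its acquire-on-read / release-on-write semantics. This uses the classic TBB API (tbb::atomic was later removed from oneTBB in favor of std::atomic), and LazySlot and its fields are made-up names.

#include "tbb/atomic.h"

// A zero-initialized slot: state 0 = empty, 1 = claimed, 2 = published.
struct LazySlot {
    tbb::atomic<int> state;
    int payload;
};

// Only the thread that wins the 0 -> 1 transition writes the payload.
bool try_publish(LazySlot& slot, int value) {
    if (slot.state.compare_and_swap(1, 0) != 0)  // full-fence CAS; returns the previous value
        return false;                            // another thread already claimed the slot
    slot.payload = value;                        // plain store, owned by the winning thread
    slot.state = 2;                              // atomic write with release semantics
    return true;
}

bool try_consume(LazySlot& slot, int& out) {
    if (slot.state != 2)                         // atomic read with acquire semantics
        return false;
    out = slot.payload;                          // visible thanks to the release/acquire pairing
    return true;
}

The __TBB_load_with_acquire and __TBB_store_with_release helpers cover the cases where a plain, non-atomic load or store needs a fence; they live in the internal tbb_machine.h header, so relying on them mainly makes sense when working inside the TBB sources themselves.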

RafSchietekat
Valued Contributor III
220 Views
Sorry for the delay... I made the changes earlier today, which wasn't too difficult after all because I leveraged spin_rw_mutex as a black box and just built on top of it; the result seems to pass muster with test_concurrent_hash_map.exe, apparently without any performance penalty. It just adds a few atomics (a waiting-set count and a vacate instruction, which should probably be packed together with the spin_rw_mutex) and does some spinning of its own.
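Not the patch itself, but a rough sketch of the general shape described, with made-up names and assuming the classic (2008-era) spin_rw_mutex interface: the rw mutex stays a black box, and a waiting-set count plus a vacate flag are packed alongside it.

#include "tbb/spin_rw_mutex.h"
#include "tbb/atomic.h"

// Sketch only: an item lock built on top of spin_rw_mutex as a black box.
// 'waiters' and 'vacate' must be zero-initialized before first use.
struct item_lock {
    tbb::spin_rw_mutex mutex;
    tbb::atomic<int>   waiters;  // threads currently spinning for this lock
    tbb::atomic<int>   vacate;   // nonzero while the item is being invalidated

    // Returns false if the item was vacated (e.g. erased) while we waited.
    bool acquire(bool write) {
        ++waiters;                               // join the waiting set before spinning
        for (;;) {
            if (vacate) { --waiters; return false; }
            if (write ? mutex.try_lock() : mutex.try_lock_read())
                break;
            // spin; a pause/yield would normally go here
        }
        --waiters;
        return true;
    }
    void release() { mutex.unlock(); }           // unlock() releases the reader or writer lock
};

In this sketch, a chain- or segment-level operation would set vacate and wait for waiters to drain to zero before invalidating or moving the item, instead of having to take every item lock itself.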

Alexey-Kukanov
Employee
220 Views

Raf,

We received your contribution and Anton is evaluating it; we will share the results here. Thank you very much for providing your feedback and ideas, and for demonstrating a good example of community collaboration.

Anton_M_Intel
Employee
220 Views
So, thank you, Raf, for the fruitful discussion and the contribution. I now see that segments can be overpopulated in some cases by at most (threads-1) items, due to a "wrong false" caused by my_logical_size. But that is OK for the moment.
We fixed the "guarantee that erasing by accessor will erase the exact element the accessor points to", and a deadlock that occurred when insert() was called by a thread holding another accessor while the same existing item was concurrently being looked up.
I believe the changes will be included in a coming development release. There may also be performance benchmarks in which the contribution is slower than the current version of the container.
I'm not sure whether a "garbage collector" for the container and an optional "blocking style" of erasing will be implemented, because they require at least an additional member in the node structure. Still, I think they could be useful.
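For readers following along, this is roughly the calling pattern involved in that deadlock scenario as described above, with made-up keys and types; the deadlock itself was internal to the container, so the sketch only illustrates the usage, not a way to reproduce the bug.

#include "tbb/concurrent_hash_map.h"
#include <cstddef>

struct IntHashCompare {
    static std::size_t hash(int k) { return static_cast<std::size_t>(k) * 2654435761u; }
    static bool equal(int a, int b) { return a == b; }
};
typedef tbb::concurrent_hash_map<int, int, IntHashCompare> Map;

// Thread 1: calls insert() while still holding an accessor on an existing item.
void update_pair(Map& m) {
    Map::accessor a;
    if (m.find(a, 1)) {        // exclusive lock on the existing item with key 1
        Map::accessor b;
        m.insert(b, 2);        // insert while accessor 'a' is still held
        b->second = a->second;
    }
}

// Thread 2, concurrently: looks up the same existing item.
void read_item(Map& m) {
    Map::const_accessor c;
    if (m.find(c, 1)) { /* read c->second */ }
}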

RafSchietekat
Valued Contributor III
220 Views
I made some changes that should make concurrent_hash_map unblockable (except between threads directly competing for the same key, of course). Currently, e.g., thread 1 holding an accessor lock can block thread 2 competing for the same lock (so far so good), which in turn blocks any thread trying to change the same chain or trying to grow the same segment, which in turn blocks any thread trying to even just read from the same segment. The changes will also give lower latency while another thread is concurrently reallocating a segment array (using fine-grained locking), all while restoring the guarantee that a key exists at most once in the context of a concurrent_hash_map.

Unfortunately, the target has moved in tbb20_20080512oss_src.

Before I try to merge the two (if I still feel like doing so myself), here are my changes relative to tbb20_20080408oss_src, for anyone interested.

(Added) Also see thread "Possible deadlock in TBB concurrent_hash_map".
