Intel® oneAPI Threading Building Blocks

concurrent_hash_map parallel operations and complexity

AJ13
New Contributor I
Hey,

As you can probably tell by my recent posts, today is international parallel data structure selection day!

I'm off to my next question: how are parallel operations supported in concurrent_hash_map? Specifically, what complexity guarantees are in place for the various operations, and do these complexities vary with the number of processors?

A lot of the container documentation mentions concurrent access, but I find the docs a bit lacking with regard to how this concurrency is implemented. I'm evaluating the overall complexity of some of my algorithms, and this information would be invaluable for my design at this point.

Basically, I am looking for something where I can find the smallest element quickly, insert data in parallel (that is, without contending on mutexes), and have as many aspects of the structure as possible operating in parallel. Of course, this might only be required once the size of the container exceeds a certain threshold.

I just printed a paper on parallel priority queues, which I'll be reading to see how they compare to serial implementations.

Thanks for your help everyone.

AJ
Anton_M_Intel
Employee
OK, if you are interested in the details of the concurrent_hash_map design, I can shed some light.

The container has three levels of data structures. The first level is an array of a fixed number (64) of segments, each of which contains a mutex and a pointer to an array of chains. The array of chains grows by powers of two as necessary. A chain is a structure with a mutex and a pointer to data items that form a linked list. Each item contains its own mutex along with the user's data.
For each operation, a hash value is calculated for the key. The lowest bits select the segment number, and the following bits are used to select a chain. Then the list is scanned until the key is matched; a good hash function should prevent these lists from getting too long.
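
To make that layout concrete, here is a rough structural sketch. All type and field names are made up for illustration, the allocation and growth logic is omitted, and this is not the actual TBB source:

    #include <cstddef>
    #include <mutex>

    struct Node {                    // one stored item
        std::mutex item_mutex;       // per-item lock
        Node* next = nullptr;        // links items into a chain's list
        // ... user's key/value pair would live here ...
    };

    struct Chain {
        std::mutex chain_mutex;      // guards this chain's linked list
        Node* head = nullptr;
    };

    struct Segment {
        std::mutex segment_mutex;    // guards growth of the chain array
        Chain* chains = nullptr;     // allocation/growth omitted in this sketch
        std::size_t chain_count = 1; // grows by powers of two as needed
    };

    const std::size_t kSegments = 64;  // fixed first-level array
    Segment segments[kSegments];

    // Routing: the lowest bits of the hash pick the segment, and the
    // following bits pick a chain within that segment's array.
    Chain& locate(std::size_t hash_value) {
        Segment& s = segments[hash_value % kSegments];
        std::size_t chain_index = (hash_value / kSegments) % s.chain_count;
        return s.chains[chain_index];
    }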

This structure is efficient only for a large number of stored keys (at least a few thousand) and with a good hash function provided by the user.
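
As a minimal usage sketch, here is how a user-supplied hash plugs in via the documented HashCompare concept; the toy hash below is only for illustration, and a production hash should mix bits more thoroughly:

    #include <tbb/concurrent_hash_map.h>
    #include <cstddef>
    #include <string>

    // Hash/equality policy required by concurrent_hash_map.
    struct MyHashCompare {
        static std::size_t hash(const std::string& x) {
            std::size_t h = 0;
            for (char c : x)
                h = (h * 17) ^ static_cast<unsigned char>(c);
            return h;
        }
        static bool equal(const std::string& x, const std::string& y) {
            return x == y;
        }
    };

    typedef tbb::concurrent_hash_map<std::string, int, MyHashCompare> StringTable;

    int main() {
        StringTable table;
        {
            // insert() acquires a per-item lock via the accessor;
            // the lock is held until the accessor goes out of scope.
            StringTable::accessor a;
            table.insert(a, "apple");
            a->second = 42;
        }
        StringTable::const_accessor ca;  // read lock
        return table.find(ca, "apple") && ca->second == 42 ? 0 : 1;
    }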