Intel® oneAPI Threading Building Blocks

TBB 2.1, anyone tried the new features yet?

robert-reed
Valued Contributor II
It's been a couple of weeks since the last OSS update and since Dave Sekowski announced the new features being prepared for release as Intel Threading Building Blocks 2.1. Andrey Marochko and I have been blogging about some of the new features. Has anyone else had a chance to try them? Any impressions, favorable or not?
nitinsayare
Beginner

Hi all,

I have used some of the new features of TBB 2.1. There is a significant performance improvement for concurrent_vector and concurrent_hash_map. I have tried to check the performance of the scalable_allocator but did not find any memory or speed improvement. I am unable to invoke some methods of concurrent_hash_map:

1. insert method with accessor & value_type argument

2. insert method with value_type argument.

Can you please explain briefly how to use these methods?

Has tbbmalloc implemented a malloc function in it?

Also, is there any default implementation of HashCompare for concurrent_hash_map whose key is of type int or double?

Is there any speed or memory overhead if I return the same value from the hash function of the HashCompare struct,

e.g. static size_t hash(const int& x) { size_t h = x; return h; } instead of a more complex hash function?

Thanks in advance

nitinsayare

Alexey-Kukanov
Employee
nitinsayare:

I am unable to invoke some methods of concurrent_hash_map:

1. insert method with accessor & value_type argument
2. insert method with value_type argument.

value_type is actually a std::pair of the key and the mapped value. So you use it like this:

my_hash_map.insert( std::make_pair(my_new_key,my_new_data) );
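For completeness, here is a minimal sketch showing both insert overloads mentioned above (the map type, the IntHashCompare struct, and the key/value names are purely illustrative):

#include <utility>
#include "tbb/concurrent_hash_map.h"

// Illustrative HashCompare for int keys (one must be supplied explicitly).
struct IntHashCompare {
    static size_t hash( const int& x ) { return static_cast<size_t>(x); }
    static bool equal( const int& a, const int& b ) { return a == b; }
};

typedef tbb::concurrent_hash_map<int, double, IntHashCompare> MyMap;

int main() {
    MyMap my_hash_map;

    // 1. insert with a value_type argument only:
    my_hash_map.insert( std::make_pair( 42, 3.14 ) );

    // 2. insert with an accessor and a value_type argument; the accessor
    //    keeps the new element write-locked until it goes out of scope.
    {
        MyMap::accessor a;
        my_hash_map.insert( a, std::make_pair( 7, 2.71 ) );
        a->second = 1.41;   // update the value while holding the lock
    }
    return 0;
}

Both calls return a bool telling you whether a new element was actually inserted (false if the key was already present).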

nitinsayare:
Has tbbmalloc implemented a malloc function in it?

No. You should call scalable_malloc and scalable_free instead of malloc and free. The replacement is not provided on purpose: doing it blindly could create additional problems, while it would not be hard for any user to do such a replacement where necessary, e.g. with #define malloc scalable_malloc.
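For example, a direct use of the scalable allocator functions (just a sketch of the calls, not a complete replacement strategy) could look like this:

#include "tbb/scalable_allocator.h"   // declares scalable_malloc / scalable_free

int main() {
    // Allocate and release through the TBB scalable allocator explicitly,
    // rather than relying on a blind global replacement of malloc/free.
    void* p = scalable_malloc( 1024 );
    if ( p ) {
        /* ... use the 1 KB buffer ... */
        scalable_free( p );
    }
    return 0;
}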

nitinsayare:
Also, is there any default implementation of HashCompare for concurrent_hash_map whose key is of type int or double?

No, there is no default implementation of HashCompare.
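So for int or double keys you have to write your own. A possible sketch for double keys (the struct name and the bit-pattern hashing are just one illustrative choice, along the same lines as the int version sketched earlier):

#include <cstddef>
#include <cstring>
#include "tbb/concurrent_hash_map.h"

// Illustrative only: hash the bit pattern of the double rather than
// truncating its value, so that 0.5 and 0.25 do not both hash to 0.
struct DoubleHashCompare {
    static size_t hash( const double& x ) {
        size_t h = 0;
        std::memcpy( &h, &x, sizeof(x) < sizeof(h) ? sizeof(x) : sizeof(h) );
        return h;
    }
    static bool equal( const double& a, const double& b ) { return a == b; }
};

typedef tbb::concurrent_hash_map<double, int, DoubleHashCompare> DoubleKeyMap;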

nitinsayare:
Is there any speed or memory overhead if I return the same value from the hash function of the HashCompare struct, e.g. static size_t hash(const int& x) { size_t h = x; return h; } instead of a more complex hash function?

It depends on the actual distribution of keys in your program. If the low bits of the keys are of sufficient variety, there should be no issues with this straightforward implementation. If they aren't (e.g. the low bits are zeroed for all keys), the hash function will provide a bad distribution of elements across the buckets of the map, thus affecting access speed.
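As an illustration of the "low bits" point (this is not from TBB, just a common mixing trick using Knuth's multiplicative constant): if all your keys happened to be multiples of 1024, the identity hash would leave the low bits zero for every key, whereas a hash like the one below folds the high bits back into the low bits:

#include <cstddef>

// Illustrative only: multiply by an odd constant and fold the high bits
// back into the low bits, so keys with identical low bits still spread
// across the low-bit-indexed buckets.
struct MixedIntHashCompare {
    static std::size_t hash( const int& x ) {
        std::size_t h = static_cast<unsigned int>( x );
        h *= 2654435761u;
        return h ^ ( h >> 16 );
    }
    static bool equal( const int& a, const int& b ) { return a == b; }
};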

nitinsayare
Beginner

It depends on the actual distribution of keys in your program. If the low bits of the keys are of sufficient variety, there should be no issues with this straightforward implementation. If they aren't (e.g. the low bits are zeroed for all keys), the hash function will provide a bad distribution of elements across the buckets of the map, thus affecting access speed.

As per the standard STL's map implementation, I see that the number of buckets is a prime number. Each item that is inserted into the map is put into one of these buckets. For instance, if the number of buckets in the map is 157 (a prime number), then a new item will go into one of the 157 buckets, thus creating a proper distribution of map elements.

In STL this is achieved by just NEW_ITEM % BUCKET_SIZE for all integral types, and some calculation for string types. For user-defined types we may need to implement the same, but that's not the point here.

The point here is that if I return some constant value (1, for instance) from the 'hash' method, then there would be only 1 bucket, and thus there is a performance penalty. If I return the same number from the hash function, there would be n buckets for a map container holding n items; the whole purpose of the map is lost in both cases.

And I do not quite understand how the low word and high word come into the picture. What's the internal implementation of concurrent_hash_map?

Just wondering why TBB hasn't provided a default hash function?

Alexey-Kukanov
Employee

I am getting lost. Are we discussing a map or a hash map? TBB does not provide a map, while the standard does not specify a hash map. What did you refer to as "the standard STL's map implementation" (not even asking why any particular implementation should be considered "standard")?

The whole purpose of hash_map, in my opinion, is to find an element in amortized constant time. Why did you say that the map of N buckets for N elements loses its purpose?

Now about the implementation of tbb::concurrent_hash_map and why the low bits are important. TBB's concurrent_hash_map does not have a fixed number of buckets. Instead, it grows dynamically as the number of elements in the container increases. For simplicity, let's say that at any given time there are 2^N buckets (in reality it's more complex; you can learn the details from the source code). The bucket to use is determined by taking the lowest N bits of the hash value. Thus a uniform distribution in the low bits of the hash values is important for tbb::concurrent_hash_map.
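A tiny stand-alone illustration of that bucket selection (simplified, not the actual TBB source):

#include <cstddef>
#include <iostream>

int main() {
    const std::size_t N = 4;                          // suppose 2^4 = 16 buckets
    const std::size_t bucket_mask = ( 1u << N ) - 1;  // the lowest N bits pick the bucket

    const std::size_t hashes[] = { 5, 1024, 2048, 4096 };
    for ( int i = 0; i < 4; ++i ) {
        // Hash values that differ only in their high bits land in the same bucket.
        std::cout << "hash " << hashes[i]
                  << " -> bucket " << ( hashes[i] & bucket_mask ) << "\n";
    }
    return 0;
}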

As for the default hash function: we have heard your feedback and will consider it.
