
Hi all,

I have used some of the new features of TBB 2.1. There is a significant performance improvement for concurrent_vector and concurrent_hash_map. I have tried to check the performance of scalable_allocator but did not find any memory or speed improvement. I am also unable to invoke some methods of concurrent_hash_map:

1. the insert method with accessor and value_type arguments;

2. the insert method with a value_type argument only.

Can you please explain briefly how to use these methods?

Has tbbmalloc implemented the malloc function in it?

Also, is there any default implementation of HashCompare for concurrent_hash_map whose key is of type int or double?

Is there any speed or memory overhead if I simply return the key as the hash value in the HashCompare struct, e.g.

static size_t hash(const int& x) { size_t h = x; return h; }

instead of a more complex hash function?

Thanks in advance

nitinsayare


nitinsayare: I am unable to invoke some methods of concurrent_hash_map

1. the insert method with accessor and value_type arguments

2. the insert method with a value_type argument only.

value_type is actually a std::pair of the key and the mapped value, so you use it like this:

my_hash_map.insert( std::make_pair(my_new_key, my_new_data) );
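A hedged sketch of both overloads (the key/value types and the helper below are my own illustration, assuming a map from int to std::string; the commented lines show the TBB 2.1 calls and are not compiled here):

```cpp
#include <string>
#include <utility>

// For a hypothetical tbb::concurrent_hash_map<int, std::string>,
// value_type is std::pair<const int, std::string>.
typedef std::pair<const int, std::string> value_type;

value_type make_entry(int key, const std::string& data) {
    return value_type(key, data);
}

// With such a map `m` (TBB 2.1 overloads):
//   m.insert(make_entry(1, "one"));     // value_type only
//   accessor a;
//   m.insert(a, make_entry(2, "two"));  // accessor + value_type:
//   a->second = "TWO";                  // `a` holds a write lock on the entry
```

The accessor variant is useful when you need to read or modify the element right after insertion while it is still locked.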

nitinsayare: Has tbbmalloc implemented the malloc function in it?

No. You should call scalable_malloc and scalable_free instead of malloc and free. The replacement is not provided on purpose, because doing it blindly could create additional problems, while it would not be hard for a user to make such a replacement where necessary, e.g. with #define malloc scalable_malloc
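To make that suggestion concrete, here is a hedged sketch of a per-file replacement. The stub bodies below only stand in for tbbmalloc so the sketch compiles on its own; in a real build you would drop them, include <tbb/scalable_allocator.h>, and link against tbbmalloc.

```cpp
#include <cstdlib>

// Stubs standing in for tbbmalloc so this sketch is self-contained;
// the real scalable_malloc/scalable_free come from the tbbmalloc library.
extern "C" void* scalable_malloc(std::size_t n) { return std::malloc(n); }
extern "C" void  scalable_free(void* p)         { std::free(p); }

// The deliberate, per-translation-unit replacement mentioned above:
#define malloc scalable_malloc
#define free   scalable_free
```

After these defines, every plain malloc/free call in this translation unit goes through the scalable allocator, without touching code in other files.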

nitinsayare: Also, is there any default implementation of HashCompare for concurrent_hash_map whose key is of type int or double?

No, there is no default implementation of HashCompare.
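A minimal sketch of what such a HashCompare could look like for int keys (the struct name and the mixing step are my own illustration, not TBB's code):

```cpp
#include <cstddef>

// Illustrative HashCompare for int keys: concurrent_hash_map expects
// two static members, hash() and equal().
struct IntHashCompare {
    static std::size_t hash(const int& x) {
        std::size_t h = static_cast<std::size_t>(x);
        h ^= h >> 16;  // cheap mixing so high bits also influence low bits
        return h;
    }
    static bool equal(const int& a, const int& b) { return a == b; }
};
```

It would then be passed as the third template argument, e.g. tbb::concurrent_hash_map<int, MyData, IntHashCompare>, where MyData is whatever you store.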

nitinsayare: Is there any speed or memory overhead if I simply return the key as the hash value in the HashCompare struct, e.g. static size_t hash(const int& x){ size_t h = x; return h; } instead of a more complex hash function?

It depends on the actual distribution of keys in your program. If the low bits of the keys are of sufficient variety, there should be no issues with this straightforward implementation. If they aren't (e.g. the low bits are zero for all keys), the hash function will provide a bad distribution of elements across the buckets of the map, thus affecting access speed.


It depends on the actual distribution of keys in your program. If the low bits of the keys are of sufficient variety, there should be no issues with this straightforward implementation. If they aren't (e.g. the low bits are zero for all keys), the hash function will provide a bad distribution of elements across the buckets of the map, thus affecting access speed.

Looking at typical STL map implementations, I see that the number of buckets is a prime number. Each item inserted into the map is placed into one of these buckets. For instance, if the number of buckets is 157 (a prime number), a new item will go into one of the 157 buckets, thus creating a proper distribution of map elements.

In STL this is achieved simply by NEW_ITEM **%** BUCKET_COUNT for all integral types, and by some calculation for string types. For user-defined types we may need to implement the same, but that's not the point here.
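The prime-modulus scheme described above can be sketched as follows (a generic illustration, not any particular STL vendor's code):

```cpp
#include <cstddef>

// Bucket selection by modulo with a prime bucket count, as in many
// STL-style hash map implementations.
std::size_t stl_style_bucket(std::size_t key,
                             std::size_t num_buckets /* prime, e.g. 157 */) {
    return key % num_buckets;
}
```

A prime modulus tends to spread keys across all buckets even when the keys themselves share common factors.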

The point here is that if I return some constant value (1, for instance) from the hash method, there would be only one bucket, and thus a performance penalty. If instead I return a distinct value for every key, there would be n buckets for a map container holding n items; the whole purpose of the map is lost in both cases.

And I seriously do not understand where the low word and high word come into the picture, and how. What is the internal implementation of concurrent_hash_map?

I am just wondering why TBB hasn't provided a default hash function.


I am getting lost. Are we discussing map or hash map? TBB does not provide map, while the standard does not specify a hash map. What did you refer to as "the standard STL's map implementation" (not even asking why any particular implementation should be considered "standard")?

The whole purpose of hash_map, in my opinion, is to find an element in amortized constant time. Why did you say that the map of N buckets for N elements loses its purpose?

Now about the implementation of tbb::concurrent_hash_map and why low bits are important. TBB's concurrent_hash_map does not have a fixed number of buckets. Instead, it grows dynamically as the number of elements in the container increases. For simplicity, let's say that at any given time there are 2^N buckets (in reality it's more complex now; you might learn the details from the source code). The bucket to use is determined by taking the lowest N bits of the hash value. Thus a uniform distribution in the low bits of the hash values is important for tbb::concurrent_hash_map.
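The power-of-two scheme just described can be sketched as follows (an illustration of the idea, not TBB's actual source):

```cpp
#include <cstddef>

// With 2^N buckets, the bucket is chosen from the lowest N bits of
// the hash value by masking.
std::size_t bucket_index(std::size_t hash_value,
                         std::size_t num_buckets /* a power of two */) {
    return hash_value & (num_buckets - 1);  // keep the lowest N bits
}
```

With an identity hash and keys that are all multiples of 256, for example, every key lands in bucket 0 of a 256-bucket table, which is exactly the bad case described earlier.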

As for the default hash function: we have heard your feedback and will consider it.
