I currently use concurrent_hash_map and, seeing the poor performance of my app, I profiled my code a little and saw that 30% of the time is spent in the lookup. So I was wondering: is there a way to know the number of collisions in the hash map? There is nothing about this in the documentation. I also grepped through the TBB sources, but I found nothing.
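Since concurrent_hash_map does not expose collision counts, one workaround is to record the hash values your HashCompare produces for the real key set and evaluate the bucket distribution offline. This is only a sketch; `count_collisions` is a hypothetical helper, not part of TBB:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper, not part of TBB: given recorded hash values and a
// table size, count how many insertions would land in an already-occupied
// bucket -- a rough estimate of the collision rate.
std::size_t count_collisions(const std::vector<std::size_t>& hashes,
                             std::size_t table_size) {
    std::vector<std::size_t> load(table_size, 0);
    std::size_t collisions = 0;
    for (std::size_t h : hashes) {
        // TBB-style tables use a power-of-two size, so only the low bits
        // of the hash select the bucket; % models that selection here.
        std::size_t bucket = h % table_size;
        if (load[bucket]++ > 0)
            ++collisions;
    }
    return collisions;
}
```

Running this over your actual keys at roughly the map's size would show whether the hash function, rather than the container itself, is the culprit.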
An addition to what Anton said: there is a so-called "performance warning" written by TBB 2.1 concurrent_hash_map to stderr. The warning is enabled by defining the TBB_DO_ASSERT or TBB_PERFORMANCE_WARNINGS macro to a non-zero value. The distribution of items across the hash map is (post-)evaluated when the map is cleared or destroyed, and if it seems bad, the warning is issued. The map should have at least 3072 elements; otherwise it is considered too small to judge. Note that the absence of the warning does not mean the distribution is good; it just might not be bad enough to deserve a warning. We decided not to make it too zealous, because some irregularity in hash value distribution will always exist.
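A minimal configuration sketch of enabling the warning (not a runnable program on its own, since it requires the TBB 2.1 headers):

```cpp
// Enable TBB's "performance warning" diagnostics: define the macro to a
// non-zero value before including any TBB header, or equivalently pass
// -DTBB_PERFORMANCE_WARNINGS=1 on the compiler command line.
#define TBB_PERFORMANCE_WARNINGS 1
#include "tbb/concurrent_hash_map.h"
// The distribution warning, if any, is printed to stderr when the map
// is cleared or destroyed.
```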
I'm curious about both laboratory and real-world comparative benchmark results for this version (after the hash code gets fixed, because both implementations rely on a decent hash function to be able to use a power-of-two array size).
(Added) Actually, the patch may well provide significantly better behaviour with a bad hash function (it will not block even if the hash function evaluates to a constant), but that's rather irrelevant, I would say, if you can still intervene.
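To illustrate why a power-of-two array size needs a decent hash function: bucket selection masks the low bits of the hash, so a hash whose low bits don't vary collapses all keys into a few buckets. The sketch below uses a hypothetical Knuth-style multiplicative mixing step as the "intervention"; the constant and shift are illustrative, not TBB's actual code:

```cpp
#include <cstddef>
#include <cstdint>
#include <set>
#include <vector>

// Bucket selection as a power-of-two-sized table does it: mask the low bits.
std::uint64_t bucket_of(std::uint64_t h, std::uint64_t table_size) {
    return h & (table_size - 1);
}

// Hypothetical mixing step (Knuth-style multiplicative hashing) a user could
// apply on top of a weak hash; golden-ratio multiplier, keep the high bits.
std::uint64_t mix(std::uint64_t h) {
    return (h * 2654435761ULL) >> 16;
}

// How many distinct buckets a set of hash values occupies.
std::size_t buckets_used(const std::vector<std::uint64_t>& hashes,
                         std::uint64_t table_size, bool mixed) {
    std::set<std::uint64_t> used;
    for (std::uint64_t h : hashes)
        used.insert(bucket_of(mixed ? mix(h) : h, table_size));
    return used.size();
}
```

With keys that are all multiples of 64 and a 64-slot table, an identity hash puts every key in bucket 0, while the mixed hash spreads them across several buckets.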
I think it would be useful to give the user the possibility to reduce/increase the 3072 threshold. What do you think?
The map should have at least 3072 elements; otherwise it is considered too small to judge.
I think it would be useful to give the user the possibility to reduce/increase the 3072 threshold. What do you think?
I don't think any preprocessor macro makes sense here. In simple benchmarks, concurrent_hash_map only beats a serial container with a lock around it once it holds at least a couple of thousand elements.
We are looking for better algorithms anyway.
What for? Making this value much lower does not make much sense because of the significantly reduced accuracy of the statistics; I feel that even this value is rather low. Making it higher does not make much sense either: all keys in the map are taken into account for the distribution estimation anyway. The only reason would be to silence the warning by increasing the threshold beyond the actual number of keys, but the TBB_PERFORMANCE_WARNINGS macro is a much more logical way to achieve that. So what user value would such a knob add?