I'm working with a large dataset, using several tbb::concurrent_vector and tbb::concurrent_hash_map containers. After running for a while I get a bad_alloc exception.
According to this answer
concurrent_vector, I assume, adds new blocks of memory but keeps using the old ones. Not moving objects is important, as it allows other threads to keep accessing the vector even while it is being resized. It probably also helps with other optimizations (such as keeping cached copies valid). The downside is that access to the elements is slightly slower, as the correct block needs to be found first (one extra dereference).
So there's a chance I'm getting bad_alloc due to heap fragmentation. How can I avoid heap fragmentation?
concurrent_vector doesn't generally release any memory; it just grabs more blocks when it has to grow. Since memory is generally not released, there's little chance that use of a concurrent_vector could result in heap fragmentation. Could it be that you're simply running out of memory? If concurrent_vector did not hold onto its memory to preserve cache lines accessing contained data (if it used realloc, for example), then heap fragmentation would be a possibility. But generally it doesn't, so it isn't.
Do you use tbbmalloc together with these containers?
Sorry, I'm a newbie at C++ and TBB. Are you asking whether I've linked with -ltbbmalloc? Yes, I've linked to that lib.
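Worth noting: linking -ltbbmalloc by itself does not reroute your program's malloc/new calls through TBB's scalable allocator; the containers only use it if you pass tbb::scalable_allocator explicitly, or if you link the proxy library. A sketch of the two link lines, assuming GCC on Linux (adjust names and paths for your toolchain and TBB version):

```shell
# Allocator library available, but malloc/new still go to the default heap:
g++ -O2 app.cpp -o app -ltbb -ltbbmalloc

# Proxy library globally replaces malloc/free/new/delete with the
# scalable allocator (can also be injected via LD_PRELOAD):
g++ -O2 app.cpp -o app -ltbb -ltbbmalloc_proxy
```

The scalable allocator keeps per-thread memory pools, which can noticeably reduce both contention and fragmentation for allocation-heavy concurrent code.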