Beginner

concurrent_queue not releasing memory?

[Note: This thread is the narrowed-down version of another thread: http://software.intel.com/en-us/forums/showthread.php?t=66469page/1/#87679 ]

It appears that if one grows and then shrinks a concurrent_queue (i.e., if the readers fall behind but then catch up), the concurrent_queue holds on to the memory it allocated to get to peak capacity.

#include <iostream>
#include <unistd.h>
#include <tbb/concurrent_queue.h>
using namespace std;

int main() {
    unsigned int n = 90000000;
    tbb::concurrent_queue<double> q;
    for (unsigned int i = 0; i < n; ++i) {
        q.push(i);
    }
    cout << "queue full" << endl;
    sleep(10); // program size at this point: roughly 1.2G
    cout << "start popping" << endl;
    for (unsigned int i = 0; i < n; ++i) {
        double v;
        q.pop(v);
    }
    cout << "done popping" << endl;
    sleep(3600); // program still using 1.2G
    return 0;
}


Is this correct? And is there a suggested workaround that people use?


3 Replies
Black Belt

It's probably a different issue (the high-water-mark behaviour of the scalable memory allocator), discussed elsewhere. In this case the program triggers the problem by not throttling the input, which may also hurt performance by scattering the data all over RAM, or worse.
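Just to illustrate the throttling point: below is a minimal sketch of bounding the queue so the producer cannot run far ahead of the consumer. It assumes tbb::concurrent_bounded_queue (set_capacity plus blocking push/pop); the capacity of 10000 and the separate producer thread are only for illustration.

// Minimal sketch: bound the queue so the producer blocks instead of
// letting the queue grow without limit. The capacity value is arbitrary.
#include <tbb/tbb.h>   // umbrella header; provides tbb::concurrent_bounded_queue
#include <iostream>
#include <thread>

int main() {
    tbb::concurrent_bounded_queue<double> q;
    q.set_capacity(10000);              // push() blocks once the queue holds 10000 items

    const unsigned int n = 90000000;

    std::thread producer([&] {
        for (unsigned int i = 0; i < n; ++i)
            q.push(i);                  // blocks when full instead of growing the footprint
    });

    double v;
    for (unsigned int i = 0; i < n; ++i)
        q.pop(v);                       // blocking pop; the consumer keeps up by construction

    producer.join();
    std::cout << "done; peak memory stays near the configured capacity" << std::endl;
    return 0;
}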


To be honest, the general behavior of a container not releasing memory after it has been allocated is a C++ classic.

It's one of those terribly non-obvious things about the language, but a pattern that is used all over the place, and even expected. Even every C++ programmer's favorite friends, std::vector and std::string, do this: even if you call vector.resize(0), the vector will not free any memory! For more information, see Item 17 of Meyers' Effective STL: the Swap Trick.
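For anyone who hasn't seen it, a rough sketch of the swap trick (with shrink_to_fit as the C++11 alternative) looks like this; the sizes are arbitrary:

#include <iostream>
#include <vector>

int main() {
    std::vector<double> v(90000000);    // large allocation
    v.resize(0);                        // size is now 0, but capacity (and the memory) is unchanged
    std::cout << "capacity after resize(0): " << v.capacity() << std::endl;

    // The swap trick: copy-construct a temporary from v (which allocates only
    // what it needs, here nothing) and swap it with v; the temporary takes
    // ownership of the old buffer and frees it when it is destroyed.
    std::vector<double>(v).swap(v);
    std::cout << "capacity after swap trick: " << v.capacity() << std::endl;

    // Since C++11, v.shrink_to_fit() is a non-binding request with the same intent.
    return 0;
}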

Black Belt

I have not yet dissected or otherwise scrutinised concurrent_queue (but just have a look at micro_queue_pop_finalizer), so I purposefully used the word "probably", but this is also easily tested...
