Intel® oneAPI Threading Building Blocks

concurrent_queue not freeing memory

bherd80
Beginner
Hi,
my program contains a concurrent_queue. A producer thread pushes pointers to event objects onto this queue; my consumer thread pops the pointers, processes them, and deletes the objects behind them.
Deleting the objects works fine, since the destructor is being called. However, the concurrent_queue doesn't seem to release the memory it used afterwards. This is a problem for me because I'm pushing millions of pointers onto the queue, which consumes almost 2 GB of memory. I definitely need to get this memory back, otherwise my application can't continue.
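Roughly, the setup looks like this (a simplified sketch; the Event class and the counts here are placeholders, my real event class is more complex):

#include <tbb/concurrent_queue.h>
#include <thread>

// Placeholder for the real event class.
class Event {};

const int kTotal = 1000000;

int main()
{
    tbb::concurrent_queue<Event*> queue;

    // Producer: pushes pointers to newly created events.
    std::thread producer([&queue]() {
        for (int i = 0; i < kTotal; i++)
            queue.push(new Event());
    });

    // Consumer: pops the pointers, processes and deletes the objects behind them.
    std::thread consumer([&queue]() {
        int consumed = 0;
        Event* ev = NULL;
        while (consumed < kTotal) {
            if (queue.try_pop(ev)) {
                delete ev;      // destructor runs here
                consumed++;
            }
        }
    });

    producer.join();
    consumer.join();
    // At this point all Event objects are gone, but the process footprint
    // may still include memory held internally by the queue/allocator.
    return 0;
}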
I looked through the forum and found that this problem has been discussed before, but I couldn't find a solution. I'm really sorry if one has been proposed and I just didn't find it. I would be very happy if someone could give me some advice on how to deal with this problem.
Thanks a lot!
Regards,
Ben
Krishna_R_Intel
Employee
Hi,
I will test these things myself with the producer-consumer example that ships with Intel TBB and report what I find.

Meanwhile, you mention that concurrent_queue doesn't free the memory it used even though the objects are deleted (and you have confirmed that their destructors are called). Could you run your app again and report whether

a) the memory footprint keeps growing over time, or
b) it stays roughly constant after a certain point?
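
One simple way to check this (a rough Linux-only sketch; use whatever tool you prefer) is to sample the process's resident set size at a few points during the run:

#include <fstream>
#include <iostream>
#include <unistd.h>

// Returns the current resident set size in bytes (Linux, via /proc/self/statm).
long residentSetBytes()
{
    long totalPages = 0, residentPages = 0;
    std::ifstream statm("/proc/self/statm");
    statm >> totalPages >> residentPages;
    return residentPages * sysconf(_SC_PAGESIZE);
}

// Example: print the footprint after pushing and again after popping/deleting,
// e.g. std::cout << residentSetBytes() / (1024 * 1024) << " MB" << std::endl;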

Did you refer to the following threads?
http://software.intel.com/en-us/forums/showthread.php?t=76955&o=d&s=lr

http://software.intel.com/en-us/forums/showthread.php?t=77007&o=d&s=lr

Please let me know if you want to share code samples with us so that we can test them and figure out what exactly is happening. I will take this thread private in that case.

Thanks,
Krishna



bherd80
Beginner
Hi Krishna,
thanks for your response. I tried to create a small sample application that reproduces the behaviour; it's appended below. The sample application is single-threaded and seems to have the same problem. I'm simply creating a huge number of Agent instances. Each of them receives 10 messages, which are deleted again afterwards. However, the memory consumption stays the same until I finally delete all the agents.
Regarding your questions:
In the example I've created a loop that repeats the whole process five times. In that case I didn't notice any further memory increase beyond the initial one. But my "real-world" application seems to consume more and more memory at runtime, even though everything is being deleted.
I was referring to different threads, but thanks for the pointers. The threads I found were:
Thanks a lot!
Regards,
Ben
#include <tbb/concurrent_hash_map.h>
#include <tbb/concurrent_queue.h>
#include <iostream>
#include <string>

using namespace std;

// Minimal stand-in for the real Event class.
class Event {};

// Assumed definition; declared elsewhere in the original program.
static const string strRumour = "rumour";

class Agent
{
private:
    // The map value types below are assumed; the template arguments were lost from the original post.
    tbb::concurrent_hash_map<string, int>          intVars;
    tbb::concurrent_hash_map<string, unsigned int> uintVars;
    tbb::concurrent_hash_map<string, double>       doubleVars;
    tbb::concurrent_hash_map<string, string>       stringVars;
    tbb::concurrent_hash_map<string, char*>        pcharVars;
    tbb::concurrent_hash_map<string, bool>         boolVars;
    tbb::concurrent_queue<Event*>                  messages;
    int i1, i2, i3, i4;
    void *v1, *v2, *v3;
public:
    Agent()
    {
        tbb::concurrent_hash_map<string, bool>::accessor a;
        boolVars.insert(a, strRumour);
        a->second = true;
    }
    void addMessage(Event* ev)
    {
        messages.push(ev);
    }
    bool empty()
    {
        return messages.empty();
    }
    Event* popMessage()
    {
        Event* event = NULL;
        messages.try_pop(event);
        return event;
    }
};

int main()
{
    Agent** agents = new Agent*[100000];
    // creating the Agent instances
    for(int i=0; i<100000; i++)
    {
        agents[i] = new Agent();
    }
    for(int n=0; n<5; n++)
    {
        int createCount=0;
        // sending 10 messages to every agent's message queue
        for(int i=0; i<100000; i++)
        {
            for(int k=0; k<10; k++)
            {
                Event* event = new Event();
                agents[i]->addMessage(event);
                createCount++;
            }
        }
        cout << createCount << " events created." << endl << flush;

        int deleteCount=0;
        // deleting messages again
        for(int i=0; i<100000; i++)
        {
            while(!agents[i]->empty())
            {
                Event* event = agents[i]->popMessage();
                if(event)
                {
                    delete event;
                    deleteCount++;
                }
            }
        }
        cout << deleteCount << " events deleted." << endl << flush;
    }

    // deleting agents
    for(int i=0; i<100000; i++)
    {
        delete agents[i];
    }
    delete[] agents;
    return 0;
}
Alexey-Kukanov
Employee

The problem of memory consumption staying at its peak is more likely due to the behavior of the TBB allocator than to the containers themselves. Check whether using std::allocator in the containers helps.
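
For example (a rough sketch based on the containers in your Agent class; the exact pair type expected by the map allocator may vary between TBB versions), the allocator is supplied as the last template argument:

#include <tbb/concurrent_hash_map.h>
#include <tbb/concurrent_queue.h>
#include <memory>
#include <string>
#include <utility>

class Event;

// concurrent_queue<T, Allocator>: overrides the default cache_aligned_allocator.
typedef tbb::concurrent_queue<Event*, std::allocator<Event*> > MessageQueue;

// concurrent_hash_map<Key, T, HashCompare, Allocator>: the HashCompare argument
// has to be spelled out in order to reach the allocator parameter.
typedef tbb::concurrent_hash_map<
    std::string, bool,
    tbb::tbb_hash_compare<std::string>,
    std::allocator<std::pair<const std::string, bool> > > BoolMap;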
