Developing Games on Intel Graphics
If you are gaming on graphics integrated into your Intel processor, this is the place for you! Find answers to your questions or post your issues with PC games.

OpenGL memory leak?

bms20
Beginner
Hi,
I'm having trouble tracking down a memory leak in OpenGL on the 8.15.10.2559 drivers. I'm writing code that does a lot of texture map uploading and recycling of texture ids for an application used in digital signage, so naturally, leaking is not acceptable.
I believe that the driver is not releasing graphics memory after re-using a texture name, but I am having difficulty tracing down the root cause of the problem. The problem seems to occur ONLY on Windows 7 (I am testing on 64-bit, but my application is 32-bit). Furthermore, the usage pattern (i.e. of texture allocation/deallocation) seems to greatly affect how quickly the memory is lost.
I do not have this problem with NVIDIA or with ATI, nor do I have it with other Intel graphics drivers on Windows. Note: previous versions of the HD 2000 driver also exhibited this problem. Linux (all hardware) also does not show this problem.
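For reference, a minimal sketch of the upload/recycle pattern I'm describing (the names and sizes are illustrative, not my actual code):

GLuint tex;
glGenTextures(1, &tex);
for (int i = 0; i < NUM_UPDATES; ++i) {
    glBindTexture(GL_TEXTURE_2D, tex);
    /* re-use the same texture name with fresh image data */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    /* ... render with the texture ... */
}
glDeleteTextures(1, &tex);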
Using Microsoft's umdh, I find that all of my traces start with something akin to this:
+ 1137408 ( 6864428 - 5727020) 46939 allocs BackTraceFC6BA0
+ 20422 ( 46939 - 26517) BackTraceFC6BA0 allocations
ntdll!RtlAllocateHeap+00000274
KERNELBASE!GlobalAlloc+0000006E
+ 228204 ( 526744 - 298540) 4604 allocs BackTraceFC8B34
+ 2113 ( 4604 - 2491) BackTraceFC8B34 allocations
ntdll!RtlAllocateHeap+00000274
KERNELBASE!GlobalAlloc+0000006E
ig4icd32!???+00000000 : 13F3F78D
That is: a large chunk of memory is lost in GlobalAlloc, and a smaller chunk, with roughly a factor of 10 fewer allocations, is lost in the Intel OpenGL driver. Since the code is compiled without frame pointers, I cannot see which functions below GlobalAlloc and ig4icd32 have been called.
Is there any way in which I can see what function in ig4icd32 called GlobalAlloc?
Does anyone have any good suggestions as to how to investigate this?
Thanks in advance,
-bms
14 Replies
bms20
Beginner
I have analyzed (and worked around) this problem now.

It appears that modifying the data pointed to by a texture map after calling glTexImage2D causes the driver to leak memory. In my code path things are a bit more complex; namely, I am texture mapping from a shared memory segment created using CreateFileMapping and MapViewOfFile.
I suspect that the driver is performing copy-on-modify under the hood and then somehow fails to free the copied memory. This makes sense, since the graphics chip is effectively a UMA design and can therefore treat system memory as a texture map, avoiding any copy-and-upload step, provided that the texture is not modified prior to rendering.
The solution was to insert a glFinish() immediately after each texture upload. It's a performance hit, but it does solve this problem.
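In other words, the workaround looks like this (a sketch, not my exact code; shared_mem_ptr stands in for my MapViewOfFile pointer):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, shared_mem_ptr);
glFinish(); /* block until the driver has consumed shared_mem_ptr */
/* only now is it safe to write new data into shared_mem_ptr */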
Hope this helps someone else out there!
-bms
jdstanhope
Beginner
I am seeing something very similar with a setup that includes an Intel HD Graphics 3000 chip and an NVIDIA 1000M on a Lenovo W520. The driver version is 8.15.10.2321. I am also on Windows 7 64-bit, but my application is 32-bit.
SergeyKostrov
Valued Contributor II
>>...Is there any way in which I can see what function in ig4icd32 called GlobalAlloc?..

It is hard to believe that Intel will release even a small piece of the driver's code.

Intel's engineers could investigate it and, of course, a test case would speed up the investigation. You know that sometimes it is not easy to reproduce a problem.

>>...Does anyone have any good suggestions as to how to investigate this?..

If the driver was built with MS Visual Studio 20xx it should have a Program Database file ( *.pdb \ PDB ).
There are also PDBs for every version of the Windows operating system ( check Microsoft's website ).

In case you have both, for the operating system and for the driver, it is a little bit easier to investigate.
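For example, pointing umdh at Microsoft's public symbol server resolves at least the OS-side frames ( the process ID and paths below are just placeholders; the driver's own PDB, ig4icd32.pdb, is generally not public, so those frames may stay unresolved ):

set _NT_SYMBOL_PATH=srv*c:\symbols*http://msdl.microsoft.com/download/symbols
umdh -p:1234 -f:snap1.log
umdh -p:1234 -f:snap2.log
umdh snap1.log snap2.log -f:diff.log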

Best regards,
Sergey
bms20
Beginner

Inserting glFinish() calls into my code paths seems to resolve the problem.

It is unclear what Intel's OpenGL driver does with memory once you hand it to the GL via glTexImage2D or glTexSubImage2D uploads.

-bms
SergeyKostrov
Valued Contributor II
>>...It appears that inserting glFinish() calls into my code paths seems to resolve the problem...

Congratulations! You've just proven that NVIDIA's statement "Never Call glFinish()" is not right in some cases. :)

Take a look at NVIDIA's "GPU Programming Guide" ( version 2.4.0, 2005 ):

...
8.6.10 Never Call glFinish() 67
...

Best regards,
Sergey
bms20
Beginner
Well, I shouldn't be calling glFinish; all it does is create a bottleneck.

And on NVIDIA, ATI, and other Intel drivers I don't need to call glFinish; everything works as expected.
This is simply an annoying fix for problems with OpenGL on Intel HD Graphics.
What would be preferable is access to a debug driver for HD Graphics, and/or a developer forum where I can interact with Intel's driver developers.
-bms
SergeyKostrov
Valued Contributor II
>>...developer's forum where I can interact with their driver developers...

That's a good idea. Why don't you try to contact somebody at Intel who is in charge of the Intel Software Network?

Best regards,
Sergey
mbaupdates
Beginner
Thank you, guys!
Alfonse especially answered me perfectly.

Yes, reallocation is actually not allowed in D3D. When reallocating, I have to copy the entire buffer contents from system memory to video memory. So I will reserve a proper amount of video memory to store the buffer and use glBufferSubData to update it when new primitives are added, just like std::vector does.
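Roughly like this (just a sketch; the capacity figure and variable names are illustrative):

GLsizeiptr capacity = 1 << 20; /* bytes reserved up front */
GLsizeiptr used = 0;
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, capacity, NULL, GL_DYNAMIC_DRAW); /* reserve */

/* when new primitives are added: */
if (used + new_bytes > capacity) {
    capacity *= 2; /* grow geometrically, like std::vector */
    glBufferData(GL_ARRAY_BUFFER, capacity, NULL, GL_DYNAMIC_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, used, cpu_copy); /* restore old contents */
}
glBufferSubData(GL_ARRAY_BUFFER, used, new_bytes, new_data);
used += new_bytes;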


mbaupdates
Beginner
I have a decade's experience in D3D, but I'm a novice at OpenGL. My project requires me to use OpenGL because it is cross-platform.

Will glBufferData leak memory?

My program looks like this:

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
main_loop
{
    glBufferData(GL_ARRAY_BUFFER, size, data, GL_DYNAMIC_DRAW);
    /* draw */
}
glDeleteBuffers(1, &vbo);

As you can see, the VBO is a dynamic one, and the buffer size is always changing.
If glBufferData does cause a memory leak, will it appear in video memory? I have tested it, and the leak did not show up in system memory.

Thank you!

SergeyKostrov
Valued Contributor II
Quoting mbaupdates
...
Will glBufferData leak memory?
...
main_loop
{
    glBufferData(GL_ARRAY_BUFFER, size, data, GL_DYNAMIC_DRAW);
    /* draw */
}
glDeleteBuffers(1, &vbo);

As you can see, the VBO is a dynamic one, and the buffer size is always changing.
If glBufferData does cause a memory leak, will it appear in video memory? I have tested it, and the leak did not show up in system memory.
...


Why do you think that the glBufferData function causes memory leaks?

rgl32
Beginner
I have tested the above loop on an Intel HD 3000 with driver version 8.15.10.2559 (see the attached file for the exact spec from GLview).

If the size of the data is sufficiently small (fewer than 5000 triangles), the system appears not to leak. However, if I increase the triangle count to 10000 and continue up to 30000, the memory usage increases until the program uses 600 MB.

glFinish() was insufficient to prevent the leak.
If I add a sleep(200) after each render call, there is no leak, but this isn't really a solution. Other GPUs (NVIDIA, ATI) are perfectly happy with sending large data arrays to the GPU.

I used glMapBuffer() instead to send the data to the VBO, and this seemed to work fine without any memory leaks. I was just curious to know if the glBufferData idea posted by Sergey is a mistake for larger data sizes and, if so, whether I should always be using glMapBuffer on Intel GPUs.
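For reference, the glMapBuffer path I switched to looks roughly like this (a sketch; max_size and the vertex names are illustrative):

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, max_size, NULL, GL_DYNAMIC_DRAW); /* allocate once */

/* per frame: map, fill, unmap instead of calling glBufferData again */
void* ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
if (ptr) {
    memcpy(ptr, vertices, size); /* size <= max_size */
    glUnmapBuffer(GL_ARRAY_BUFFER);
}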

thanks

SergeyKostrov
Valued Contributor II
What exactly did you mean regarding 'idea'? I'm confused, because I quoted a user and then asked a question about memory leaks.

Best regards,
Sergey
rgl32
Beginner

I apologise Sergey, I must have read the forum post too quickly. I was testing mbaupdates' loop concerning glBufferData on an Intel HD 3000, in a laptop and on a desktop PC. I found no memory leak on the laptop, but as the number of vertices increased, the chance of a memory leak also increased on the desktop PC.

So I guess I am providing more information on the reproducible bug with glBufferData on some Intel HD 3000 cards, using mbaupdates' glBufferData loop. I wondered if anyone else was experiencing these problems. I fixed them with glMapBuffer, and I wondered whether this is recommended practice for Intel cards or whether it's a driver bug. In fact, if you don't use glBufferData and instead send the vertices straight from client memory with glVertexAttribPointer, you leak large chunks of memory very quickly (unless you put a sleep between each render call).
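To be concrete, the client-memory path that leaks for me looks like this (a sketch, assuming a compatibility context; the names are illustrative):

glBindBuffer(GL_ARRAY_BUFFER, 0); /* no VBO bound */
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, vertices); /* client memory */
glDrawArrays(GL_TRIANGLES, 0, vertex_count);
glDisableVertexAttribArray(0);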

Once again, sorry for making the mistake with regard to your forum post, Sergey; it was mbaupdates' loop I was supposed to be referencing.

SergeyKostrov
Valued Contributor II
Quoting rgl32

I apologise Sergey, I must have read the forum post too quickly. I was testing mbaupdates' loop concerning glBufferData on an Intel HD 3000, in a laptop and on a desktop PC. I found no memory leak on the laptop, but as the number of vertices increased, the chance of a memory leak also increased on the desktop PC.

[SergeyK] Thank you for the response.

So I guess I am providing more information on the reproducible bug with glBufferData on some Intel HD 3000 cards, using mbaupdates' glBufferData loop. I wondered if anyone else was experiencing these problems. I fixed them with glMapBuffer, and I wondered whether this is recommended practice for Intel cards or whether it's a driver bug.

[SergeyK] Unfortunately, we still haven't heard anything from Intel Software Engineers.
...


Best regards,
Sergey
