chrisanderman
Beginner
508 Views

OpenGL out of memory error when exceeding 128MB of textures

Hello there,

I am working on an embedded OpenGL graphics application running on an Intel Atom Z530 with the GMA500 graphics hardware. (It's my understanding that the GMA500 is a PowerVR under the hood, but I'm not sure.) I'm running with the Tungsten Graphics "Gallium" driver on Ubuntu 9.10 Karmic Koala. Oh, you should also know that I have 1 GB of available system memory.

Here's the problem: I have code that allocates a bunch of 512x512x32 textures (about 1MB apiece). When I get to about 118-120 of these, I get an "out of memory" error from OpenGL, and I also get this message on the console: "error: INTEL_ESCAPE_ALLOC_REGION failed".

This, along with simple measurements while looking at "top", indicate to me that I'm hitting up against an ~128MB limit for textures. The odd thing is this: this architecture doesn't have dedicated video ram, it's shared. And I can tell for sure that OpenGL is using system ram for the textures because I can see the "free" ram going down in 'top'. So why would I get an 'out of memory' error? I would expect opengl to simply use more of my available system ram. Why would there be such a hard limit? Is there some way to change what this apparent "hard limit" is set to?
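For what it's worth, the numbers in the post do line up with a 128MB cap. A quick back-of-the-envelope check (plain Python; the values are the ones stated above, and the ~10MiB gap would be driver or alignment overhead, which is an assumption):

```python
# A 512x512 texture with 32-bit (4-byte) texels:
bytes_per_texture = 512 * 512 * 4            # 1,048,576 bytes = exactly 1 MiB

# Failure happens at roughly 118-120 such textures:
total_at_failure = 118 * bytes_per_texture   # 118 MiB

limit = 128 * 1024 * 1024                    # the suspected 128 MiB cap

print(bytes_per_texture)          # 1048576
print(total_at_failure / 2**20)   # 118.0
print(limit / 2**20)              # 128.0
```

So each texture is exactly 1 MiB before any per-texture overhead, and failing around 118-120 of them is consistent with a 128 MiB pool that also holds some driver bookkeeping.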

Thanks!

Chris

12 Replies

Hi Chris,

Yes, the GMA500 is a PowerVR core.

Although the video memory is allocated off the system memory, there is usually a limit set through the BIOS that caps the amount the graphics driver / part can steal for video allocation. You should check the BIOS configuration to see if you can increase this limit.

Cheers,
-Ganesh
chrisanderman
Beginner

Hi Ganesh!
I checked my BIOS and the only option I see is to set the AGP aperture size. It was already set to 256MB, and the only other option was 128MB, so I left it alone.
Is there anything else I can do?
Thanks!
Chris
Stephen_H_Intel
Employee

Hi Chris

I'm looking for an OpenGL driver expert to discuss this with at the moment; thanks for your patience.

In the meantime, when you say you have 1GB of available system memory, is that before running the program or at the point where it fails?

Thx

Steve

chrisanderman
Beginner

Hi Steve,
That sounds great, I really appreciate the help!
Our device has 1GB of total physical RAM onboard. At the point of failure, there is significantly less free RAM, but my tests have shown there is still plenty there (several hundred MB).
Thanks,
Chris
Stephen_H_Intel
Employee

Hi Chris

There's been some discussion about this, and there is a possibility that, between mipmap generation and memory stride requirements, more than 1MB may be used for each 512x512x32 texture. From your description it sounds like a simple test for you would be to try allocating 64x64x32, 128x128x32, or 256x256x32 textures and calculate how much memory you get to use before running out.
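To put a number on the mipmap part of Steve's point: a full mip chain adds roughly one third on top of the base level. A minimal sketch (plain Python; it ignores any per-level stride/alignment padding the driver may add, which is exactly the other overhead Steve mentions):

```python
def mip_chain_bytes(size, bytes_per_texel):
    """Total bytes for a square texture plus its full mipmap chain
    (size, size/2, ..., 1), ignoring driver alignment padding."""
    total = 0
    while size >= 1:
        total += size * size * bytes_per_texel
        size //= 2
    return total

base = 512 * 512 * 4                  # 1 MiB base level (32-bit texels)
with_mips = mip_chain_bytes(512, 4)   # 1,398,100 bytes, ~1.33 MiB
print(with_mips / base)               # ~1.333: mips add about a third
```

At ~1.33 MiB per texture instead of 1 MiB, a 128 MiB pool runs out around 96 textures rather than 128, so mipmaps alone don't fully explain failing at 118-120, but combined with stride padding they plausibly could.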

This might give us some more clues and maybe lead you to an optimum size for textures in your app.

Cheers

Steve

chrisanderman
Beginner

Hi Steve,
I ran the test that you described. Note that I am now using 16-bit textures (GL_RGBA4 internalFormat), as 32-bit is really overkill for what I'm doing. My test allocates textures until I get an "out of memory" GL error. The approximate memory use was calculated like this: texture_size * texture_size * 2 * num_textures.
Results:
Dimensions      # of Textures Created   Approx. Mem Usage (MB)
32x32x16        42,351                  86.7348
64x64x16        1,982                   16.2365
128x128x16      1,982                   64.9462
256x256x16      1,000                   131.072
512x512x16      247                     129.499
1024x1024x16    60                      125.829
2048x2048x16    13                      109.052
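For reference, the "Approx. Mem Usage" column follows directly from the formula stated above (bytes reported as decimal MB, i.e. bytes / 1e6); a sketch that reproduces it:

```python
def approx_mem_mb(dim, num_textures, bytes_per_texel=2):
    # texture_size * texture_size * 2 * num_textures, in decimal MB
    return dim * dim * bytes_per_texel * num_textures / 1e6

print(approx_mem_mb(32, 42351))    # 86.734848
print(approx_mem_mb(256, 1000))    # 131.072
print(approx_mem_mb(512, 247))     # 129.499136
```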
Interestingly, I peak just over my assumed 128MB limit in the 256 case, so I guess I was wrong about that limit!
Could you tell me more about "mip map generation and memory stride requirements"?
Thanks!
Chris
jeffreylee2011
Beginner

I too think that there could be a problem with the BIOS.
chrisanderman
Beginner

Hi Jeff, could you explain what you mean? I didn't understand your post.
Stephen_H_Intel
Employee

Hi Chris

131072 is actually 128*1024, which is exactly 128MB, so it does look like that is still your limiting value.

It might be that the driver is limiting what it is using, rather than using the aperture size set in the BIOS. You might try using a different driver to see if that is more lenient.

Assuming you can't try that, I have these questions to get a clearer feel for what you are doing:
1: Are you using all those textures at the same time?
2: Are you using managed textures?
3: Do the textures get updated frequently?

Regards

Steve


chrisanderman
Beginner

Hi Steve,
As it turns out, memory was getting reserved for mipmaps even though I never turned this on (I guess this is default OpenGL behavior). I turned off this mipmap generation and I've now been able to allocate about 173 MB worth of textures (512x512x16). So that's good news! I can post the full table of results if you want (let me know). 512-size textures seemed to let me use the most RAM, though.
As far as the driver goes, I'm pretty sure we are stuck with the one we've got.
To answer your questions:
1. In the worst-case scenario, yes. If the user "zooms" out enough, I have to render all of these textures. They form a flat grid of "tiles".
2. I'm not sure what "managed textures" means, haha. I'm just using plain old opengl textures, as far as I know.
3. Most of the textures don't get updated frequently. Usually only one is getting updated (and even then not every frame).
Thanks!
-Chris
Stephen_H_Intel
Employee

Hi Chris

That's great news about the extra memory from mipmaps. You should also consider that there is probably some driver overhead in GPU RAM, as well as any render targets you have, and vertex/index buffers, which I assume you use for rendering. Also, there are probably shaders and other bits and bobs using up the remaining memory.

Something that might help you. The numbers in my math are made up, but you could substitute your own values. It occurs to me that if you are displaying, say, 10x10 textures on a 1280x1024 screen, then each tile is only 128x102. To display those you don't need 512x512 textures (128x128 would be OK without losing detail onscreen). 128x128 would also be faster to draw, since the stride across video RAM as you step from pixel to pixel is smaller, improving cache coherency.

As the user changes zoom, you might check, for a given texture, what the screen resolution of a texel is (divide size on screen by texture size). If the result is very small, then delete the texture you have on the card and upload a smaller version of it; if the result is large, upload a bigger version. You don't have to do this for every texture on every frame; just having a cycle running that checks one visible texture per frame should do it. Since you are only displaying a handful of polys, there should be time to change at least one in a frame. You could experiment to see what the best update per frame is, trading off speed vs. visual effect.

Doing it this way, assuming you are doing a flat grid and not something fancy with 3D, you would never (in theory) need more GPU memory than the equivalent of a couple of screenfuls.
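The size-selection step Steve describes can be sketched in a few lines. This is an illustrative helper, not code from the original app; the function name, the power-of-two assumption, and the 512 cap are all made up for the example:

```python
def best_texture_size(on_screen_pixels, max_size=512):
    """Smallest power-of-two texture size that is at least as large as
    the tile's footprint on screen (hypothetical helper)."""
    size = 1
    while size < on_screen_pixels and size < max_size:
        size *= 2
    return size

# Steve's example: a 10x10 grid of tiles on a 1280x1024 screen
tile_w = 1280 // 10                 # 128 pixels per tile
print(best_texture_size(tile_w))    # 128 -- no need for 512x512 here
print(best_texture_size(300))       # 512
```

This also makes the "couple of screenfuls" bound concrete: at 1280x1024 with 2-byte texels, one screenful of texture data is about 2.6 MB, so even two full screens' worth is only a few MB against the ~128 MB pool.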

I think this is how I'd write it.

Hope this helps.

Steve
