I am working on an embedded OpenGL graphics application running on an Intel Atom Z530 with the GMA500 graphics hardware. (It's my understanding that the GMA500 is a PowerVR under the hood, but I'm not sure.) I'm running with the Tungsten Graphics "Gallium" driver on Ubuntu 9.10 Karmic Koala. Oh, you should also know that I have 1 GB of available system memory.
Here's the problem: I have code that allocates a bunch of 512x512x32 textures (about 1MB apiece). When I get to about 118-120 of these, I get an "out of memory" error from OpenGL, and I also get this message on the console: "error: INTEL_ESCAPE_ALLOC_REGION failed".
This, along with simple measurements while looking at "top", indicate to me that I'm hitting up against an ~128MB limit for textures. The odd thing is this: this architecture doesn't have dedicated video ram, it's shared. And I can tell for sure that OpenGL is using system ram for the textures because I can see the "free" ram going down in 'top'. So why would I get an 'out of memory' error? I would expect opengl to simply use more of my available system ram. Why would there be such a hard limit? Is there some way to change what this apparent "hard limit" is set to?
Yes, the GMA500 is a PowerVR core.
Although the video memory is allocated off the system memory, there is usually a limit set through the BIOS that caps the amount the graphics driver / part can steal for video allocation. You should check the BIOS configuration to see if you can increase this limit.
I'm looking for an OpenGL driver expert to discuss this with at the moment, thx for your patience.
In the meantime, when you say you have 1 GB of available system memory, is that before running the program or at the point where it fails?
There's been some discussion about this, and there is a possibility that, between mipmap generation and memory stride requirements, more than 1 MB may be used for each 512x512x32 texture. From your description it sounds like a simple test would be to allocate 64x64x32, 128x128x32, or 256x256x32 textures and calculate how much memory you get to use before running out.
This might give us some more clues and maybe lead you to an optimum size for textures in your app.
Dimensions      # of Textures Created    Approx. Mem Usage (MB)
32x32x16        42,351                   86.7348
64x64x16        1,982                    16.2365
128x128x16      1,982                    64.9462
256x256x16      1,000                    131.072
512x512x16      247                      129.499
1024x1024x16    60                       125.829
2048x2048x16    13                       109.052
131072 is actually 128*1024, which is exactly 128 MB, so it does look like that is still your limiting value.
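To double-check the arithmetic, a short script (assuming 2 bytes per texel for the x16 formats, and decimal megabytes as in the table) reproduces the larger-texture figures, which all converge near the same ceiling:

```python
# Textures created before OOM, from the table above.
results = {
    (256, 256): 1000,
    (512, 512): 247,
    (1024, 1024): 60,
    (2048, 2048): 13,
}

BYTES_PER_TEXEL = 2  # assumed for the x16 formats

for (w, h), count in results.items():
    # Decimal MB, matching the units used in the table.
    mb = count * w * h * BYTES_PER_TEXEL / 1e6
    print(f"{w}x{h}x16: {count:5d} textures -> {mb:7.3f} MB")
```

Every row lands between roughly 109 and 131 MB, consistent with a hard cap near 128 MB plus some per-texture overhead that varies with size.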
It might be that the driver is limiting what it uses, rather than using the aperture size set in the BIOS. You might try a different driver to see if it is more lenient.
Assuming you can't try that, I have these questions to get a clearer feel for what you are doing:
1: Are you using all those textures at the same time?
2: Are you using managed textures?
3: Do the textures get updated frequently?
That's great news about the extra memory from mipmaps. You should also consider that there is probably some driver overhead in GPU RAM, as well as any render targets you have, and the vertex/index buffers which I assume you use for rendering. There are probably also shaders and other bits and bobs using up the remaining memory.
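The mipmap overhead itself is easy to quantify: each level is a quarter the size of the one above it, so a full chain adds roughly a third on top of the base level. A quick sketch (the 4 bytes per texel for a x32 texture is my assumption):

```python
def mip_chain_bytes(width, height, bytes_per_texel):
    """Total bytes for a texture plus its full mipmap chain."""
    total = 0
    while True:
        total += width * height * bytes_per_texel
        if width == 1 and height == 1:
            break
        width = max(1, width // 2)
        height = max(1, height // 2)
    return total

base = 512 * 512 * 4                 # 1 MiB base level
full = mip_chain_bytes(512, 512, 4)  # base level plus all mip levels
print(full / base)                   # -> about 1.333
```

So a "1 MB" 512x512x32 texture actually costs about 1.33 MB once the mip chain is counted, which by itself accounts for a good chunk of the gap you measured.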
Something that might help you (the numbers in my math are made up, but you could substitute your own values): it occurs to me that if you are displaying, say, 10x10 textures on a 1280x1024 screen, then each tile is only 128x102. To display those you don't need 512x512 textures; 128x128 would be fine without losing detail on screen. 128x128 would also be faster to draw, since the stride across video RAM as you step from pixel to pixel is smaller, improving cache coherency.
As the user changes zoom, you might check, for a given texture, what the on-screen resolution of a texel is (divide the size on screen by the texture size). If the result is very small, delete the texture you have on the card and upload a smaller version of it; if the result is large, upload a bigger version. You don't have to do this for every texture on every frame; a cycle that checks one visible texture per frame should do it. Since you are only displaying a handful of polys, there should be time to change at least one per frame. You could experiment to see how many updates per frame works best, trading off speed vs. visual effect.
Doing it this way, assuming you are doing a flat grid and not something fancy with 3D, you would never (in theory) need more GPU memory than the equivalent of a couple of screenfuls.
I think this is how I'd write it.
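A minimal sketch of the size-selection step (the tile sizes are the made-up values from above, and `choose_texture_size` is a hypothetical helper, not part of any API):

```python
def choose_texture_size(tile_pixels_on_screen, max_size=512):
    """Pick the smallest power-of-two texture size that still covers
    the tile's on-screen footprint, so no detail is lost on screen."""
    size = 1
    while size < tile_pixels_on_screen and size < max_size:
        size *= 2
    return size

# A 1280x1024 screen split into a 10x10 grid gives 128x102 tiles;
# a 128x128 texture is enough, so the 512x512 original can be swapped out.
print(choose_texture_size(128))  # -> 128
print(choose_texture_size(102))  # -> 128
print(choose_texture_size(600))  # -> 512 (capped at max_size)
```

Run once per frame against a single visible texture, this keeps resident texture memory bounded by what is actually on screen rather than by the full-resolution source data.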
Hope this helps.