I'm working on a game for the Intel Game Demo Contest. I'm using OpenGL.
I've recently updated the code to use vertex buffer objects (I was using vertex arrays previously). With the NVIDIA and ATI cards I tested, the code works fine. However, I'm having problems running it on an Intel 965Q. Under Windows XP, the code crashes if I use glMapBufferARB. Under Linux, with the DRI drivers, the code doesn't crash, but the paths that use indexed data (glDrawElements) simply don't display anything (the paths that use glDrawArrays seem to work).
There's a possibility that I'm doing something wrong and the nVidia/ATI drivers are just more forgiving than the Intel ones, but my code is pretty straightforward - it's straight out of the GL_ARB_vertex_buffer_object specification.
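To be concrete, the pattern I'm following is essentially this (vertex_data, index_data, and num_indices are placeholders for the real names; error checking omitted):
/* One-time setup: create the buffers and upload the data. */
GLuint vbo, ibo;
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof vertex_data, vertex_data,
                GL_DYNAMIC_DRAW_ARB);
glGenBuffersARB(1, &ibo);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ibo);
glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, sizeof index_data, index_data,
                GL_STATIC_DRAW_ARB);

/* Per frame: with buffers bound, the pointer arguments become
   byte offsets into the bound buffer. */
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glVertexPointer(3, GL_FLOAT, 0, (void *)0);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ibo);
glDrawElements(GL_QUADS, num_indices, GL_UNSIGNED_INT, (void *)0);
glDisableClientState(GL_VERTEX_ARRAY);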
Are there known issues with the vertex buffer objects support on Intel, with OpenGL?
I'm sorely tempted to go back to vertex arrays...
Thank you,
Mauro
Mauro,
There's probably something wrong in your usage. The Linux and Windows drivers are developed by two separate teams, pretty much independent of each other, so if both show problems with your application, it's more likely an app issue than a bug in each driver. Under Linux, you ought to be able to run Mesa with no HW acceleration and see if it works... that might help narrow down the problem as well.
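A quick way to confirm which implementation the context actually picked up (hardware driver vs. software Mesa) is to dump the GL strings right after context creation; for example:
/* Requires <stdio.h> and a current GL context. With a software Mesa
   build, GL_RENDERER should typically mention "Mesa" rather than
   naming the Intel hardware. */
printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));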
Thanks,
Chris
Here's a bit of code that crashes, straight out of the game source code:
/* Bind the vertex and index buffers for this ribbon. */
glBindBufferARB(GL_ARRAY_BUFFER_ARB, ribbon_vertex_vbo_id);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ribbon_quad_vbo_id);

/* With a VBO bound, the pointer arguments below are byte offsets into
   the buffer, computed with the usual NULL-pointer offsetof trick. */
glVertexPointer(3, GL_FLOAT, sizeof *vertices,
                &((struct vertex *)NULL)->position);
glNormalPointer(GL_FLOAT, sizeof *vertices,
                &((struct vertex *)NULL)->normal);
glTexCoordPointer(2, GL_FLOAT, sizeof *vertices,
                  &((struct vertex *)NULL)->texuv[0]);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

/* Map the vertex buffer, fill it, unmap, then draw from the bound
   index buffer (the last argument is an offset, not a pointer). */
gl_vertices = (struct vertex *)glMapBufferARB(GL_ARRAY_BUFFER_ARB,
                                              GL_READ_WRITE_ARB);
if (gl_vertices) {
    initialize_outer_ribbon_vertices(gl_vertices, x_offset);
    if (glUnmapBufferARB(GL_ARRAY_BUFFER_ARB)) {
        glDrawElements(GL_QUADS, NUM_OUTER_RIBBON_INDICES,
                       GL_UNSIGNED_INT, 0);
    }
}
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
Behavior on Intel/Linux: doesn't crash, but doesn't display anything.
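One data point I still need to collect there: whether the driver reports a GL error on the draw call that displays nothing. A minimal check (assuming <stdio.h>):
/* A driver that silently rejects the index buffer may still
   raise an error here. */
GLenum err = glGetError();
if (err != GL_NO_ERROR)
    fprintf(stderr, "GL error after glDrawElements: 0x%x\n", err);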
In fact, I have written a self-contained application that displays a single quad and crashes on Windows XP, with Intel. I can produce the code if there's any interest.
I won't rule out the possibility that there's something wrong with the application code, but I'm finding that increasingly unlikely. I've spoken with other developers who have faced the same sort of problems. I'm going back to vertex arrays.
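That said, if mapping turns out to be the culprit, a possible middle ground (untested on the Intel driver) would be to keep the VBOs and update them with glBufferSubDataARB instead of glMapBufferARB. Here staging_vertices is a hypothetical client-side copy of the vertex array and NUM_OUTER_RIBBON_VERTICES its element count:
/* Fill a client-side staging array, then copy it into the VBO in one
   call, avoiding the mapped pointer entirely. */
initialize_outer_ribbon_vertices(staging_vertices, x_offset);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, ribbon_vertex_vbo_id);
glBufferSubDataARB(GL_ARRAY_BUFFER_ARB, 0,
                   NUM_OUTER_RIBBON_VERTICES * sizeof *staging_vertices,
                   staging_vertices);
glDrawElements(GL_QUADS, NUM_OUTER_RIBBON_INDICES, GL_UNSIGNED_INT, 0);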
Mauro,
If you've got a little app that can demonstrate this on Windows, send it on over! Put simply, if I can get a test case that shows the driver developers something is wrong, it's much easier to get a fix.
Chris
I'll see if I can get more information or a repro case out of it.