Hi all,
I'm using OpenGL to convert video frames from 10-bit YUV420p to 8-bit RGB. The YUV frame data is loaded as a texture with:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, m_frameWidth, m_frameHeight + m_frameHeight / 2, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, videoFrame.data());
In the fragment shader it's accessed with:
#version 130
// irrelevant variable definitions here
uniform usampler2D frameTex;

void main()
{
    // The component value is stored in the 10 least significant bits,
    // so to normalize it, divide by the maximum value that can be coded
    // on 10 bits (2^10 - 1 = 1023).
    float Y = float(texture(frameTex, vec2(gl_TexCoord[0].s, gl_TexCoord[0].t * YHeight)).r) / 1023.0;
    float U = float(texture(frameTex, vec2(gl_TexCoord[0].s / 2, UOffset + gl_TexCoord[0].t * UHeight)).r) / 1023.0;
    float V = float(texture(frameTex, vec2(gl_TexCoord[0].s / 2, VOffset + gl_TexCoord[0].t * VHeight)).r) / 1023.0;
    gl_FragColor = vec4(HDTV * vec3(Y, U, V), 1.0);
}
The problem: every texel I read with texture() has the value (0, 0, 0, 1).
The very same code works when I switch the application to the discrete NVIDIA card.
What could be the problem here?
My system configuration:
System Used: Lenovo W530
CPU: i7-3740QM
GPU: HD Graphics 4000
Graphics Driver Version: 9.17.10.2843 (the newest available for the laptop)
Operating System: Windows 8
Occurs on non-Intel GPUs?: No
Hi Jacek,
I am talking with the OpenGL team about this. Sorry for taking so long to look at this.
-Michael
Hi Jacek,
Are you still seeing this issue with the latest driver?
-Michael
Hi Michael,
Thanks for your responses, first of all.
Unfortunately, on my laptop (Lenovo W530) I cannot install drivers directly from Intel; I have to use the ones provided by the vendor, and those are pretty old.
Are you implying that the latest driver version fixes this issue? If so, there would be nothing I can do on my side but advise users to install the newest drivers.
Regards,
Jacek
Hi Jacek,
We are not sure whether the latest driver fixes the issue; we were checking to see if you had already tried it.
We can't reproduce the problem with the code shown so far. Do you have a sample we can use to reproduce the issue?
-Michael
Hello Michael,
It's been a while since I looked in here, because this issue was postponed in the project and I had no time to prepare code to reproduce it.
Nevertheless, if you'd still be so kind as to take a look, here it is:
main.cpp - http://pastebin.com/bW9H9icU
vertex.glsl - http://pastebin.com/GyM2rLDj
fragment.glsl - http://pastebin.com/yX8KvsQf
fragment_alt.glsl - http://pastebin.com/JussZjNq
The application uses the GLFW and GLEW libraries.
The main concern is line main.cpp:378, where the texture is loaded. If you uncomment line main.cpp:17, the texture is loaded in an alternative way (line main.cpp:385), which works but is much slower.
There should be a post with the code sample before the one about the texture.
Nevertheless, a friend has found a workaround for this problem. It turns out that when the min and mag filters are set to nearest instead of linear, it works all right. In code, this:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
was changed to this:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
As for why, I have yet to find out.
Hi Jacek,
Thanks, I will get this filed and into the right hands. The driver you mentioned in the first post: is that still the latest driver you have seen this on? It is something I will be asked by the driver team.
-Michael
Right, good that you're asking, because since my first post I have changed my system completely. Now it is:
System Used: Lenovo P50
CPU: i7 6700HQ
GPU: HD Graphics 530
Graphics Driver Version: 20.19.15.4326 (from Lenovo; cannot install drivers directly from Intel)
Operating System: Windows 10 64-bit
Occurs on non-Intel GPUs?: No
The symptoms are the same as before, though.
Hi Jacek,
I heard back from the driver team on this issue. According to the OpenGL spec, for integer textures:
TEXTURE_MAG_FILTER must be NEAREST, and TEXTURE_MIN_FILTER must be NEAREST or NEAREST_MIPMAP_NEAREST
Otherwise the texture is incomplete, and per the spec, RGBA (0, 0, 0, 1) is what the shader samples.
So per the spec, the Intel driver is responding correctly. That leaves the question of why it works on NVIDIA; I can't say, other than that it is something they are doing in their driver. From our side, since we are compliant with the OpenGL spec, the driver team feels no further action is needed.
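In practice this means integer textures must be sampled unfiltered. One common pattern (a sketch, not code from these posts) is to avoid normalized coordinates entirely and use texelFetch, which takes exact integer texel coordinates and bypasses filtering, so the NEAREST-only completeness rule for integer textures cannot be violated:

```glsl
#version 130
// Sketch: sampling a GL_R16UI texture without filtering. texelFetch reads a
// single texel by integer coordinates, so no min/mag filter is ever applied.
uniform usampler2D frameTex;

void main()
{
    ivec2 texel = ivec2(gl_FragCoord.xy);          // exact texel position
    uint raw = texelFetch(frameTex, texel, 0).r;   // 10-bit value in 16 bits
    float Y = float(raw) / 1023.0;
    gl_FragColor = vec4(Y, Y, Y, 1.0);             // grayscale for illustration
}
```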
Hope this helps
-Michael