Developing Games on Intel Graphics

Loading texture to integer internal format results in all zeroes

Jacek_T_2
Beginner
1,853 Views

Hi all,

I'm using OpenGL to convert video frames from 10-bit YUV420p to 8-bit RGB. YUV frame data is loaded as a texture with:

glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, m_frameWidth, m_frameHeight + m_frameHeight / 2, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, videoFrame.data()); 

In the fragment shader it's accessed with:

#version 130

// irrelevant variables definitions here 

uniform usampler2D frameTex; 

void main() 
{ 
	// component value is saved on 10 least significant bits, 
	// so to normalize it divide by maximum value that can be coded on 10 bits (2^10 - 1 = 1023) 
	float Y = float(texture(frameTex, vec2(gl_TexCoord[0].s, gl_TexCoord[0].t * YHeight)).r) / 1023.0; 
	float U = float(texture(frameTex, vec2(gl_TexCoord[0].s / 2, UOffset + gl_TexCoord[0].t * UHeight)).r) / 1023.0; 
	float V = float(texture(frameTex, vec2(gl_TexCoord[0].s / 2, VOffset + gl_TexCoord[0].t * VHeight)).r) / 1023.0; 

	gl_FragColor = vec4(HDTV * vec3(Y, U, V), 1.0); 
}

Now, all the texels I get with texture() have the value (0, 0, 0, 1).

The very same code works when I switch the application to use the discrete Nvidia card.

What could be the problem here?

My system configuration:
System Used: Lenovo W530
CPU: i7-3740QM
GPU: HD Graphics 4000
Graphics Driver Version: 9.17.10.2843 (the newest available for the laptop)
Operating System: Windows 8
Occurs on non-Intel GPUs?: No

12 Replies
Michael_C_Intel2
Employee

Hi Jacek,

I am talking with the OpenGL team about this. Sorry for taking so long to look at this.

-Michael
Michael_C_Intel2
Employee

Hi Jacek,

Are you still seeing this issue with the latest driver?

-Michael

Jacek_T_2
Beginner

Hi Michael,

Thanks for the responses, first of all.

Unfortunately, on my laptop (Lenovo W530) I cannot install drivers directly from Intel; I have to use the ones provided by the vendor, and those are pretty old.

Are you implying that the latest driver version fixes this issue? If so, there would be nothing I can do on my side but advise users to install the newest drivers.

Regards,
Jacek

Michael_C_Intel2
Employee

Hi Jacek,

We are not sure if the latest driver fixes the issue; we are checking to see if you have tried it already.

We can't reproduce the issue with the code snippets provided. Do you have a complete sample we can use to reproduce it?

-Michael

Jacek_T_
Beginner

Hello Michael,

It's been a while since I last looked in here; the issue was postponed in the project, and I had no time to prepare code to reproduce it.

Nevertheless, if you'd still be so kind as to take a look, here it is:
main.cpp - http://pastebin.com/bW9H9icU
vertex.glsl - http://pastebin.com/GyM2rLDj
fragment.glsl - http://pastebin.com/yX8KvsQf
fragment_alt.glsl - http://pastebin.com/JussZjNq

Application uses GLFW and GLEW libraries.

The main concern is line main.cpp:378, where the texture is loaded. If you uncomment line main.cpp:17, the texture is loaded in an alternative way (line main.cpp:385), which works but is much slower.

Jacek_T_
Beginner

The texture file can be found here:
https://dolby.box.com/s/2mpzg587gf3vvz1ky9iwfd7acd2ur7ry

Jacek_T_
Beginner

There should be a post with a code sample before the one with the texture.

Nevertheless, a friend has found a workaround for this problem. It turns out that when the min and mag filters are set to nearest instead of linear, it works all right. In code, this:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

was changed to this:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

As for why, I have yet to find out.
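For completeness, a sketch of the full setup with the working filter state, assuming the same GL_R16UI upload as in the first post (frameTexId and the wrap modes are assumptions; only the NEAREST filters are the essential part). This is a configuration fragment, not a standalone program:

```c
/* Filter state only has to be in place by the time the texture is
 * sampled, but setting it right after binding keeps things tidy. */
glBindTexture(GL_TEXTURE_2D, frameTexId);  /* frameTexId: assumed handle */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI,
             m_frameWidth, m_frameHeight + m_frameHeight / 2, 0,
             GL_RED_INTEGER, GL_UNSIGNED_SHORT, videoFrame.data());
```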

Michael_C_Intel2
Employee

Hi Jacek,

Thanks, I will get this filed and into the right hands. Is the driver you mentioned in the first post still the latest driver you have seen this on? It is something I will be asked by the driver team.

-Michael
Jacek_T_
Beginner

Right, good that you're asking, because since the first post I have changed my system completely. Now it is:
System Used: Lenovo P50
CPU: i7-6700HQ
GPU: HD Graphics 530
Graphics Driver Version: 20.19.15.4326 (from Lenovo; cannot install drivers directly from Intel)
Operating System: Windows 10 64-bit
Occurs on non-Intel GPUs?: No

The symptoms are the same as before, though.

Michael_C_Intel2
Employee

Hi Jacek,

I heard back from the driver team on this issue. According to the OpenGL Spec:

 

If the internalformat is integer, TEXTURE_MAG_FILTER must be NEAREST and TEXTURE_MIN_FILTER must be NEAREST or NEAREST_MIPMAP_NEAREST.

So when a linear filter is set on the texture, the texture is incomplete, and in that case RGBA(0, 0, 0, 1) is sampled in the shader, according to the spec.

So per the spec the Intel driver is responding correctly. That leaves the question of why it works on Nvidia; I can't say, other than that it is something they are doing in their driver. On our side, since we are compliant with the OpenGL spec, the driver team feels no further action is needed.

Hope this helps

-Michael 
