I have an NV12 D3D11 texture array from another source, and I would like to hardware-encode its slices with libmfx. I have existing code that encodes individual NV12 D3D9 surfaces in the same way, and I'm trying to adapt it to D3D11.
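For context, this is roughly the shape of the array as I understand it. I don't create it myself, so this description is illustrative (the dimensions, array size, and bind flags are assumptions about what the producing component uses):

```cpp
// Illustrative only: approximately how I believe the incoming array
// is described. I receive it from another component.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 1920;               // example dimensions
desc.Height           = 1080;
desc.MipLevels        = 1;
desc.ArraySize        = 8;                  // several NV12 frames share one texture
desc.Format           = DXGI_FORMAT_NV12;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_RENDER_TARGET; // whatever the source used
```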
In the program I already have a D3D11 device (the one the textures were originally created on), and I use it to create the encoder session, passing it in via MFX_HANDLE_D3D11_DEVICE. Then, following the d3d11_allocator.cpp sample, I set a frame allocator whose GetHDL implementation resolves each mid to an mfxHDLPair, with first set to the texture pointer and second set to the array subresource index (sketched below). Everything else is identical to the working D3D9 setup.
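A minimal sketch of that setup, assuming an already-created ID3D11Device* named d3d11Device. The MyMid record and the function names are my own (illustrative), and the other allocator callbacks (Alloc/Lock/Unlock/Free) are wired up in the real code but omitted here:

```cpp
#include <d3d11.h>
#include <mfxvideo.h>

// One record per frame: the shared texture array plus the slice to encode.
// (MyMid is my own illustrative type, not a Media SDK one.)
struct MyMid {
    ID3D11Texture2D *texture;    // the NV12 texture array
    UINT             arrayIndex; // subresource (array slice) for this frame
};

// GetHDL callback, modeled on d3d11_allocator.cpp: the handle is written
// as an mfxHDLPair with first = texture pointer, second = array index.
static mfxStatus MyGetHDL(mfxHDL /*pthis*/, mfxMemId mid, mfxHDL *handle)
{
    if (!mid || !handle)
        return MFX_ERR_INVALID_HANDLE;

    MyMid *m = static_cast<MyMid *>(mid);
    mfxHDLPair *pair = reinterpret_cast<mfxHDLPair *>(handle);
    pair->first  = m->texture;
    pair->second = reinterpret_cast<mfxHDL>(static_cast<UINT_PTR>(m->arrayIndex));
    return MFX_ERR_NONE;
}

mfxStatus InitEncoderSession(ID3D11Device *d3d11Device, mfxSession *session)
{
    mfxVersion ver = { {0, 1} };
    mfxStatus sts = MFXInit(MFX_IMPL_HARDWARE | MFX_IMPL_VIA_D3D11, &ver, session);
    if (sts != MFX_ERR_NONE)
        return sts;

    // Hand the existing device (where the textures live) to the session.
    sts = MFXVideoCORE_SetHandle(*session, MFX_HANDLE_D3D11_DEVICE, d3d11Device);
    if (sts != MFX_ERR_NONE)
        return sts;

    // static so the allocator outlives this call; the session keeps using it.
    static mfxFrameAllocator allocator = {};
    allocator.GetHDL = MyGetHDL;
    // Alloc/Lock/Unlock/Free are also set in the real code (omitted).
    return MFXVideoCORE_SetFrameAllocator(*session, &allocator);
}
```

Each mfxFrameSurface1 I pass to EncodeFrameAsync then has Data.MemId pointing at one of these MyMid records, so GetHDL can recover both the texture and the slice index.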
This kind of works, in that it encodes, but the output isn't correct. The stream has the expected framerate and parameters, but the encoder appears to have always read the texture at array index zero: the output contains only some of the expected frames, so the apparent framerate is very low. If I make the array larger and never write to the index-zero texture, the output stream is entirely green (i.e. all-zero YUV).
So, is this use case actually supported? Is there some additional option I need to set somewhere to make the encoder see the subresource index?
(The system is a 4500U running current Windows 10, with Media SDK 2017 R2.)