Media (Intel® oneAPI Video Processing Library, Intel Media SDK)
Access community support with transcoding, decoding, and encoding in applications using media tools from Intel. This includes Intel® oneAPI Video Processing Library and Intel® Media SDK.

sample_decode failing with DX11


Hello, I am trying to run the sample_decode example from the Intel Media SDK 2013 R2. It works when I don't specify the Direct3D version (it defaults to D3D9), but when I specify -d3d11, it fails to initialize. The error happens in CD3D11Device::Init(), after D3D11CreateDevice returns successfully, in the following code:

m_pDXGIDev = m_pD3D11Device;
m_pDX11VideoDevice = m_pD3D11Device;
m_pVideoContext = m_pD3D11Ctx;


Here m_pDX11VideoDevice is a null pointer even after the assignment from m_pD3D11Device (which isn't null), so the second MSDK_CHECK_POINTER fails.
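If I understand the sample correctly, these members are ATL CComQIPtr smart pointers, so each assignment performs a hidden QueryInterface; a null result means the QI failed, not the copy. Written out explicitly (interface names as in the sample, error handling is my own sketch):

```cpp
#include <d3d11.h>

// What the smart-pointer assignment does under the hood: QueryInterface
// for ID3D11VideoDevice on the freshly created ID3D11Device.
HRESULT QueryVideoDevice(ID3D11Device* pD3D11Device,
                         ID3D11VideoDevice** ppVideoDevice)
{
    // CComQIPtr::operator= swallows the HRESULT and just stores NULL on
    // failure; calling QueryInterface directly lets you inspect it.
    return pD3D11Device->QueryInterface(
        __uuidof(ID3D11VideoDevice),
        reinterpret_cast<void**>(ppVideoDevice));
}
```

An E_NOINTERFACE result here would explain the null pointer: the device was created, but the driver/runtime does not expose the D3D11 video interfaces.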

My command line is sample_decode -r -i "myinputfile.h264" -d3d11. The computer is a Dell XPS 8500 with an i7 3770 CPU; the monitor is plugged into the integrated GPU with a VGA cable. There is also a discrete NVIDIA card but nothing is plugged into it right now.



I suspect the NVIDIA card is being found instead of the Intel device. D3D11 does not require a monitor to be attached to initialize and use a GPU. Can you try running the <install_dir>\tools\mediasdk_sys_analyzer\win32\sys_analyzer.exe tool and provide a report?


New Contributor I

Do the D3D11 surfaces work on Windows 7, or just Windows 8?



Here is the output of sys_analyzer.exe. I can see that the GT 640 is indeed listed as active which is strange since there's nothing connected to it. I'm not familiar with DXGI or D3D11 for that matter, is there a way to tell it to use a particular graphics device?
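To partially answer my own question after some reading: DXGI lets you enumerate adapters and hand a specific one to D3D11CreateDevice. A sketch of what I mean (matching the adapter by its description string is my own idea, not something the sample does):

```cpp
#include <dxgi.h>
#include <d3d11.h>
#include <string>

// Walk the adapters reported by a DXGI factory (obtained earlier via
// CreateDXGIFactory) and return the first one whose description
// mentions "Intel". The caller is responsible for Release().
IDXGIAdapter* FindIntelAdapter(IDXGIFactory* pFactory)
{
    IDXGIAdapter* pAdapter = nullptr;
    for (UINT i = 0;
         pFactory->EnumAdapters(i, &pAdapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC desc;
        pAdapter->GetDesc(&desc);
        if (std::wstring(desc.Description).find(L"Intel") != std::wstring::npos)
            return pAdapter;
        pAdapter->Release();
    }
    return nullptr;
}
```

The returned adapter would then be passed as the first argument of D3D11CreateDevice, together with D3D_DRIVER_TYPE_UNKNOWN (required whenever an explicit adapter is given).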

I just tried plugging the monitor into the nvidia card instead and the behavior is still exactly the same, m_pDX11VideoDevice remains null after assignment to m_pD3D11Device, which is non-null.

EDIT: I also just tried removing the NVIDIA card from the PC and plugging into the integrated graphics. No luck there, same behavior. This is really strange.

EDIT2: I uninstalled everything (the NVIDIA driver, the Intel HD driver, the Intel Media SDK), restarted, reinstalled the driver and Media SDK, and got the same error. Also, with my test video I'm now getting only 120 fps instead of 400. :S

Intel Media SDK System Analyzer (64 bit)

The following versions of Media SDK API are supported by platform/driver:

Version  Target  Supported  Dec  Enc
1.0      HW      Yes        X    X
1.0      SW      Yes        X    X
1.1      HW      Yes        X    X
1.1      SW      Yes        X    X
1.3      HW      Yes        X    X
1.3      SW      Yes        X    X
1.4      HW      Yes        X    X
1.4      SW      Yes        X    X
1.5      HW      Yes        X    X
1.5      SW      Yes        X    X
1.6      HW      Yes        X    X
1.6      SW      Yes        X    X
1.7      HW      No
1.7      SW      Yes        X    X

Graphics Devices:
Name Version State
Intel(R) HD Graphics 4000 08
NVIDIA GeForce GT 640 Active

System info:
CPU: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
OS: Microsoft Windows 7 Home Premium
Arch: 64-bit

Installed Media SDK packages (be patient...processing takes some time):
Intel® Media SDK 2013 (x64)
Intel(R) Media SDK 2012 R3 (x64)
Intel® Media SDK 2013 R2 (x64)

Installed Media SDK DirectShow filters:
Intel® Media SDK MP3 Decoder :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\imc_mpa_dec_ds.dll
Intel® Media SDK JPEG Decoder :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\jpeg_dec_filter.dll
Intel® Media SDK MPEG-2 Splitter :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\imc_mp2_spl_ds.dll
Intel® Media SDK H.264 Encoder :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\h264_enc_filter.dll
Intel® Media SDK MVC Decoder :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\mvc_dec_filter.dll
Intel® Media SDK AAC Decoder :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\imc_aac_dec_ds.dll
Intel® Media SDK MPEG-2 Decoder :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\mpeg2_dec_filter.dll
Intel® Media SDK MP4 Splitter :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\imc_mp4_spl_ds.dll
Intel® Media SDK MPEG-2 Muxer :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\imc_mp2_mux_ds.dll
Intel® Media SDK MP4 Muxer :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\imc_mp4_mux_ds.dll
Intel® Media SDK H.264 Decoder :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\h264_dec_filter.dll
Intel® Media SDK MP3 Encoder :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\imc_mpa_enc_ds.dll
Intel® Media SDK AAC Encoder :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\imc_aac_enc_ds.dll
Intel® Media SDK MPEG-2 Encoder :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\mpeg2_enc_filter.dll
Intel® Media SDK VC-1 Decoder :
C:\Program Files\Intel\Media SDK 2013 R2\samples\_bin\x64\vc1_dec_filter.dll

Installed Intel Media Foundation Transforms:
Intel® Hardware VC-1 Decoder MFT : {059A5BAE-5D7A-4C5E-8F7A-BFD57D1D6AAA}
Intel® Hardware H.264 Decoder MFT : {45E5CE07-5AC7-4509-94E9-62DB27CF8F96}
Intel® Hardware MPEG-2 Decoder MFT : {CD5BA7FF-9071-40E9-A462-8DC5152B1776}
Intel® Quick Sync Video H.264 Encoder MFT : {4BE8D3C0-0515-4A37-AD55-E4BAE19AF471}
Intel® Hardware Preprocessing MFT : {EE69B504-1CBF-4EA6-8137-BB10F806B014}



Sorry for the confusion. The use of Media SDK with Direct3D 11 requires Direct3D 11.1, which is only available when running Windows 8.

Thank you for pointing out the issue with our sample when run on Windows 7 with Direct3D 11.0 drivers.
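One way an application could detect this up front is to check for the 11.1 runtime after creating the device: QueryInterface for ID3D11Device1 only succeeds when the Direct3D 11.1 runtime is present. This is a sketch of such a check, not part of the shipped sample:

```cpp
#include <d3d11_1.h>

// Returns true when the Direct3D 11.1 runtime is available on this
// system, so the D3D11 video path can be used; otherwise the caller
// should fall back to the D3D9 path.
bool HasD3D11_1Runtime(ID3D11Device* pDevice)
{
    ID3D11Device1* pDevice1 = nullptr;
    HRESULT hr = pDevice->QueryInterface(__uuidof(ID3D11Device1),
                                         reinterpret_cast<void**>(&pDevice1));
    if (SUCCEEDED(hr))
    {
        pDevice1->Release();
        return true;   // 11.1 runtime present (e.g. Windows 8)
    }
    return false;      // plain Windows 7: use the -d3d9 path instead
}
```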


Since D3D11 is required for hardware-accelerated decoding on a different video adapter, does that mean Windows 7 users can only get HW support when the display is plugged into the integrated GPU?


D3D11 is required to use acceleration hardware on an adapter that does not have 1) a display attached and 2) a user logged in. This is because the Microsoft D3D9 architecture requires a displayable render target at the time an object is created.

With D3D11.1, Microsoft has introduced full support for true "headless" acceleration.

You can use D3D9 acceleration on any adapter if the OS believes there is a monitor attached to it (for example, when two display monitors are attached to the system). There are a few ways to 'trick' the OS into thinking a monitor is attached (see


Thank you for the explanations, but this is still not very clear to me. Let's say a user has a discrete GPU and two monitors, one plugged into the iGPU and one into the dGPU. Using D3D9, is QuickSync HW acceleration available on both GPUs? Now say he's trying to display video on the monitor attached to the dGPU. Is there a heavy performance hit from transferring the Direct3D surface from the iGPU to the dGPU, if that's supported at all? Would the hit be smaller with Direct3D 11.1?

Basically I'm trying to understand what is the recommended hardware setup for decoding video with HW acceleration, using D3D9. Unfortunately D3D11.1 is out of the question for the time being.



For D3D9, acceleration is only available on an Intel device with an active display. The Intel device can be a secondary device in a multi-monitor configuration (with NVIDIA as primary), but D3D9 requires a display mode to be set on it in order for it to be used for acceleration. This is a Microsoft architectural requirement.


Ok. I suppose transferring a D3D surface from one graphics card to the other must be relatively expensive. Is there a way to use QuickSync HW-accelerated decoding into system memory directly, or does it only work with D3D surfaces? I'm aware that the software implementation uses system memory; what I'm wondering is whether it's possible to use HW acceleration while still using system memory as the destination for decoded frames. This would be faster than decoding to D3D surfaces and then copying back to system memory.
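Concretely, what I have in mind is something like this (structure and function names are from the Media SDK API; session creation and bitstream handling from sample_decode are omitted, so treat it as a sketch):

```cpp
#include "mfxvideo.h"

// Request hardware decode with system-memory output. The implementation
// (HW vs SW) is chosen at session creation time, e.g.:
//   MFXInit(MFX_IMPL_HARDWARE_ANY, &version, &session);
// The IOPattern below asks the decoder to deliver frames into system
// memory instead of D3D surfaces; the library copies internally, which
// is why this is somewhat slower than video-memory output.
mfxStatus InitHwDecodeToSystemMemory(mfxSession session, mfxVideoParam* par)
{
    par->IOPattern = MFX_IOPATTERN_OUT_SYSTEM_MEMORY;
    return MFXVideoDECODE_Init(session, par);
}
```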

New Contributor I

Quick Sync can definitely use hardware encoding from system memory. It is a little slower, though.

I do not yet have benchmarks for decoding, so, I apologize I cannot offer you those kinds of numbers.

I have run through quite a bit of different encoding scenarios on different Quick Sync enabled CPUs, and here is what I have found. I hope some of this will help you in your quest to better understand the decoding performance.

If you look at the top speeds of my development system, I get 156 FPS @1920x1080 with D3D/Hardware, but I only get 143 FPS with SystemMem/Hardware.

So, I get about 91% of the Direct3d speed when I use system memory.

Note, these speeds are with Async=4 and empty NV12 surfaces; real video tends to be about 10-15% slower again, so 143 * [0.90, 0.85] = [128.7, 121.55].

So, on my system using real video, with system memory, I could expect about 121-128 FPS without other frame input limitations.

Changing from NV12 to RGBA, or other parameters like TargetUsage from BALANCED to BEST_QUALITY will drop your speeds, also.

[changing TargetUsage from BALANCED to BEST_SPEED will make your encoding run faster, though!]

I give away the benchmark tool for free:


Thanks, camkego, I tried your tool but it seems to stay stuck at "3% complete - 75 seconds remaining". CPU usage is 0. Should I just wait an hour or something?

New Contributor I


Hmm, nope, that is a bug! The whole run should take less than two minutes. I don't know what is happening; maybe it is an issue related to the 2nd video card, I'm not sure.

I am sorry it isn't working for you, it has worked on dozens of systems.

 I will spend some time thinking about what the issue might be.

I see you have a 3770, I did find some 3770 benchmarks for you.


Thanks again. I ended up doing some benchmarks myself using the sample_decode application and tweaking the code a bit. It looks like decoding to system memory is about 4x faster using the hardware implementation than software; based on your results I would expect it to be only marginally faster if decoding to video memory.


I have a new question about this, which I posted as a new topic; however, it isn't appearing in the list of topics... In any case, here is the link, please advise: