I am trying to understand how memory allocation works with the Intel Media SDK (MSDK). Let's say we are just decoding, using system memory for input and output. The documentation states:
If an application needs to control the allocation of video frames, it can use callback functions through the mfxFrameAllocator interface. If an application does not specify an allocator, an internal allocator is used.
We still need an allocator for the I/O surfaces, but the decoder should not need us to provide it with any buffers. So why does DecodeFrameAsync always take a surface_work parameter? I tried passing NULL, but the call fails with MFX_ERR_NULL_PTR. And surface_work is not an output parameter; surface_out is.
mfxStatus MFXVideoDECODE_DecodeFrameAsync(mfxSession session,
                                          mfxBitstream *bs,
                                          mfxFrameSurface1 *surface_work,
                                          mfxFrameSurface1 **surface_out,
                                          mfxSyncPoint *syncp);
DecodeFrameAsync (like the other *Async functions) needs a surface to operate on, whether it lives in system memory, video memory, or opaque memory. The former two are allocated and controlled by the developer, while opaque surfaces are managed by the SDK itself. See the tutorial simple_5_transcode_async to get started with opaque surfaces.
You can download the tutorials from here: https://software.intel.com/en-us/intel-media-server-studio-support/training
Ah, I think I figured it out. In external allocation mode, the surface_work surface is not actually consumed by the decoding process itself; it is simply locked and assigned to surface_out when the decoder is ready to output a frame. When the decoder has no frame to output, that input surface goes unused. In internal-allocator mode, however, the surface is actually used for caching decoding results and may become locked even when the decoder has nothing to output.
Is this correct?