Media (Intel® Video Processing Library, Intel Media SDK)
Access community support for transcoding, decoding, and encoding in applications that use media tools such as Intel® oneAPI Video Processing Library and Intel® Media SDK

DecodeFrameAsync surface_work parameter in internal buffer allocation mode

dr_asik
Beginner

I am trying to understand how memory allocation works with the MSDK. Let's say we are just decoding and using system memory for in/out. The documentation states:

If an application needs to control the allocation of video frames, it can use callback functions through the mfxFrameAllocator interface. If an application does not specify an allocator, an internal allocator is used.

We still need an allocator for I/O surfaces, but the decoder should not need us to provide it with any buffers. So why does DecodeFrameAsync always take a surface_work parameter? I tried passing null, but it fails with MFX_ERR_NULL_PTR. And surface_work is not an output parameter; surface_out is.

mfxStatus MFXVideoDECODE_DecodeFrameAsync(mfxSession session, mfxBitstream *bs, mfxFrameSurface1 *surface_work, mfxFrameSurface1 **surface_out, mfxSyncPoint *syncp);
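
For reference, here is a simplified sketch of the decode loop I am working from (error handling trimmed; GetFreeSurface and DecodeAvailableFrames are just my own placeholder names, the helper simply scans my surface pool for one the SDK is not currently holding):

#include "mfxvideo.h"

// Placeholder helper: return a surface from the application's pool that the
// SDK is not currently holding (Data.Locked == 0), or NULL if none is free.
mfxFrameSurface1* GetFreeSurface(mfxFrameSurface1* pool, mfxU16 poolSize)
{
    for (mfxU16 i = 0; i < poolSize; ++i)
        if (pool[i].Data.Locked == 0)
            return &pool[i];
    return NULL;
}

mfxStatus DecodeAvailableFrames(mfxSession session, mfxBitstream* bs,
                                mfxFrameSurface1* pool, mfxU16 poolSize)
{
    for (;;) {
        // surface_work must always be a free surface from our pool, even
        // though the application never writes to it directly.
        mfxFrameSurface1* surfaceWork = GetFreeSurface(pool, poolSize);
        if (!surfaceWork)
            return MFX_ERR_NOT_ENOUGH_BUFFER;       // every surface is still locked

        mfxFrameSurface1* surfaceOut = NULL;
        mfxSyncPoint      syncp      = NULL;
        mfxStatus sts = MFXVideoDECODE_DecodeFrameAsync(session, bs, surfaceWork,
                                                        &surfaceOut, &syncp);

        if (sts == MFX_ERR_MORE_DATA)    return MFX_ERR_NONE;  // caller must feed more bitstream
        if (sts == MFX_ERR_MORE_SURFACE) continue;             // retry with another surface_work
        if (sts < MFX_ERR_NONE)          return sts;           // real error
        if (!syncp)                      continue;             // warning only (a real app would
                                                                // sleep briefly on MFX_WRN_DEVICE_BUSY)

        // A decoded frame is ready; it may be one submitted on an earlier call.
        MFXVideoCORE_SyncOperation(session, syncp, 60000 /* ms timeout */);
        // ... consume surfaceOut->Data here ...
    }
}
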
Sravanthi_K_Intel

The DecodeFrameAsync function (and the other Async functions) needs a surface to operate on, be it system memory, video memory, or opaque memory. The former two are controlled by the developer, while the latter is managed by the SDK itself. See the tutorial (simple_5_transcode_async) to get yourself started with opaque surfaces.

You can download the tutorials from here: https://software.intel.com/en-us/intel-media-server-studio-support/training
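
In outline, the opaque-surface setup in those tutorials looks roughly like the sketch below (error checks mostly omitted; InitDecoderWithOpaqueSurfaces is just an illustrative name, and videoParam is assumed to have already been filled, e.g. by MFXVideoDECODE_DecodeHeader). The key point is that the application still creates mfxFrameSurface1 structures to pass as surface_work, but never allocates the actual pixel buffers:

#include <cstring>
#include <vector>
#include "mfxvideo.h"

// Sketch: initialize a decoder that uses opaque (SDK-managed) surfaces.
// The caller owns the two vectors so the surface headers stay alive for the
// lifetime of the decoder.
mfxStatus InitDecoderWithOpaqueSurfaces(mfxSession session,
                                        mfxVideoParam& videoParam,
                                        std::vector<mfxFrameSurface1>& surfaces,
                                        std::vector<mfxFrameSurface1*>& surfacePtrs)
{
    videoParam.IOPattern = MFX_IOPATTERN_OUT_OPAQUE_MEMORY;

    mfxFrameAllocRequest request;
    std::memset(&request, 0, sizeof(request));
    mfxStatus sts = MFXVideoDECODE_QueryIOSurf(session, &videoParam, &request);
    if (sts < MFX_ERR_NONE)
        return sts;

    const mfxU16 numSurfaces = request.NumFrameSuggested;

    // Surface "headers" only: Info is filled in, the Data pointers stay NULL,
    // because the SDK allocates and manages the real buffers internally.
    surfaces.assign(numSurfaces, mfxFrameSurface1());
    surfacePtrs.resize(numSurfaces);
    for (mfxU16 i = 0; i < numSurfaces; ++i) {
        surfaces[i].Info = videoParam.mfx.FrameInfo;
        surfacePtrs[i]   = &surfaces[i];
    }

    // Describe the opaque pool to the decoder through an extended buffer.
    mfxExtOpaqueSurfaceAlloc opaqueAlloc;
    std::memset(&opaqueAlloc, 0, sizeof(opaqueAlloc));
    opaqueAlloc.Header.BufferId = MFX_EXTBUFF_OPAQUE_SURFACE_ALLOCATION;
    opaqueAlloc.Header.BufferSz = sizeof(opaqueAlloc);
    opaqueAlloc.Out.Surfaces    = surfacePtrs.data();
    opaqueAlloc.Out.NumSurface  = numSurfaces;
    opaqueAlloc.Out.Type        = request.Type;

    mfxExtBuffer* extBuffers[] = { &opaqueAlloc.Header };
    videoParam.ExtParam    = extBuffers;
    videoParam.NumExtParam = 1;

    mfxStatus initSts = MFXVideoDECODE_Init(session, &videoParam);

    // The extended buffer only needs to be visible to Init; detach the local
    // array afterwards so videoParam is not left pointing at stack memory.
    videoParam.ExtParam    = NULL;
    videoParam.NumExtParam = 0;
    return initSts;
}

DecodeFrameAsync is then called exactly as with system memory, passing one of these surface headers as surface_work on each call; the SDK maps them to its internal buffers.
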

dr_asik
Beginner

Ah, I think I figured it out. In external allocation mode, the "surface_work" parameter is not actually used for decoding; it is simply locked and assigned to the "surface_out" parameter when the decoder wants to output a frame. When the decoder has no frame to output, this input frame is simply unused. In internal allocator mode, however, the frame is actually used for caching decoding results and may get locked even when the decoder has nothing to output.
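
If that is right, the practical consequence is that the application can never assume surface_work comes back free, even when no frame is output. Purely as an illustration (hypothetical helper name):

#include "mfxvideo.h"

// Sketch: one DecodeFrameAsync call that reports whether the SDK kept a
// reference to surface_work even though it produced no output frame.
bool DecoderKeptWorkSurface(mfxSession session, mfxBitstream* bs,
                            mfxFrameSurface1* surfaceWork)
{
    mfxFrameSurface1* surfaceOut = NULL;
    mfxSyncPoint      syncp      = NULL;
    mfxStatus sts = MFXVideoDECODE_DecodeFrameAsync(session, bs, surfaceWork,
                                                    &surfaceOut, &syncp);
    (void)sts;

    // True when no frame came out but the decoder is still holding on to
    // surface_work (e.g. as a cached result), meaning it must not be reused
    // or freed until Data.Locked drops back to 0.
    return (surfaceOut == NULL) && (surfaceWork->Data.Locked != 0);
}
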

Is this correct?

Sravanthi_K_Intel

Yes, your understanding is correct. Hope that gets you rolling.
