Media (Intel® oneAPI Video Processing Library, Intel Media SDK)

MSS Video Frames Allocator questions



I'm currently integrating Media SDK (CentOS 7, 2015R3 release) into my project, and I have several questions related to frame allocators:

- The allocator methods "Lock", "Unlock", and "GetHDL": when exactly are they called by the decoder and the encoder (or VPP)? What is the exact logical purpose of each of these methods? I couldn't find precise descriptions in the documentation, only examples.

- In the tutorials that demonstrate working with video memory (for example, transcoding_vmem), I saw that the encoder, internally during its initialization, calls the allocator's allocation callback and tries to allocate frames. Why is that? I assumed that all required frames are allocated by the user before initializing the decoder and encoder (their number calculated as the number of frames requested by the decoder plus the number of frames requested by the encoder).

- In the tutorial that demonstrates opaque surfaces, no external allocator is used at all. How are video surfaces (in video memory) allocated in this case?

- In my product I have an unpredictable number of decoders, encoders, video mixers (VPP composition), resizers, etc. The data flow of surfaces between all those "building blocks" is defined by the user and can change dynamically on the fly. I thought it would be efficient to create a separate session for each such "building block" and then join all those sessions. Is there any performance penalty with this approach?

- Regarding the frame allocator in this project: instead of having a surface pool for each "decoder + something" pair, I thought about implementing a global pool of surfaces; whenever I need a free surface, I fetch any surface of the required resolution from this pool. In other words, several decoders with the same resolution can share one surface pool (with all sessions joined in this case, of course). Do you think this solution is correct and efficient?
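To make the first question concrete, here is how I currently picture the division of labor between the three callbacks. This is a minimal, self-contained sketch in plain C++, not the real mfxFrameAllocator interface from mfxvideo.h; the type and method names are simplified stand-ins, and please correct me if the mental model is wrong:

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical stand-ins for mfxMemId and a surface record; the real
// types live in mfxvideo.h and carry far more state.
using MemId = int;

struct Surface {
    std::vector<std::uint8_t> pixels;  // CPU-visible data, valid only while locked
    std::uintptr_t nativeHandle;       // e.g. a VASurfaceID on Linux
    bool locked;
};

struct Allocator {
    std::map<MemId, Surface> surfaces;
    MemId next = 0;

    // Alloc: reserve a surface up front (called during initialization).
    MemId alloc(std::uintptr_t handle) {
        surfaces[next] = Surface{{}, handle, false};
        return next++;
    }
    // Lock: the library needs CPU access to the pixels; for real video
    // memory this is where the surface would be mapped (e.g. vaMapBuffer).
    std::uint8_t* lock(MemId id) {
        Surface& s = surfaces.at(id);
        s.locked = true;
        s.pixels.resize(64);  // pretend-map: expose a CPU pointer
        return s.pixels.data();
    }
    // Unlock: CPU access is finished; the mapping can be released.
    void unlock(MemId id) { surfaces.at(id).locked = false; }
    // GetHDL: hand out the native handle so the GPU path can use the
    // surface directly, with no CPU mapping at all.
    std::uintptr_t getHdl(MemId id) const { return surfaces.at(id).nativeHandle; }
};
```

Is that roughly the intent, i.e. Lock/Unlock are only for CPU access and GetHDL is for the GPU-side path?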
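And this is the pool-sizing rule I mean in the second question, sketched with a hypothetical stand-in for the suggested frame count that QueryIOSurf reports per component:

```cpp
// Hypothetical stand-in for mfxFrameAllocRequest::NumFrameSuggested,
// as filled in by each component's QueryIOSurf call.
struct AllocRequest { int numFrameSuggested; };

// My understanding: a pool shared by decoder and encoder is sized as the
// sum of both components' suggested counts (plus any async-depth headroom).
int sharedPoolSize(const AllocRequest& dec, const AllocRequest& enc) {
    return dec.numFrameSuggested + enc.numFrameSuggested;
}
```

If the user sizes the pool this way up front, why would the encoder still invoke the allocator during its own initialization?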
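For the last question, the shared pool I have in mind would look roughly like this. Again a hypothetical plain-C++ sketch, not Media SDK API; the "locked" counter scan mirrors how the samples search for a free surface:

```cpp
#include <map>
#include <utility>
#include <vector>

// Hypothetical shared surface pool: decoders with the same resolution
// draw free surfaces from one per-resolution free list.
struct Surf {
    int width;
    int height;
    int locked;  // >0 while some component still holds the surface
};

class SharedPool {
    // One free list per (width, height) pair.
    std::map<std::pair<int, int>, std::vector<Surf>> pools;
public:
    void add(int w, int h, int count) {
        auto& v = pools[{w, h}];
        v.insert(v.end(), count, Surf{w, h, 0});
    }
    // Fetch any unlocked surface of the requested resolution, or nullptr.
    Surf* acquire(int w, int h) {
        for (Surf& s : pools[{w, h}])
            if (s.locked == 0) { s.locked = 1; return &s; }
        return nullptr;
    }
    void release(Surf* s) { if (s) s->locked = 0; }
};
```

Would anything in Media SDK's surface-locking model break if several joined sessions draw from one pool like this?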

BTW, if you already have material that describes these things, I'd be glad to get links to it. I couldn't find any...

Thanks in advance,
Oleg Fomenko

1 Reply

Hi Oleg,

Thanks for your questions. Much of the SDK documentation can be found in the manuals distributed with the installation, in the doc/ folder. You can also find them in the developer's guide.

The tutorials are just basic starting points for developers to get started with and understand Media SDK; they are not optimized or tuned for performance. For such cases, I recommend looking at our samples. They are more optimized, and sample_multi_transcode also demonstrates the multiple-sessions scenario. They should help you get started on your use case.

For more detailed questions, we can discuss via PM. Also, are you using the MSS Essentials or Pro version?
