I am using the QuickSync hardware H.264 decoder in a media application. I am getting the correct output buffers from pmfxOutSurface->Data, but the TimeStamp information in this structure is not in monotonically increasing order. Could somebody explain what the timestamp order of the QuickSync H.264 decoder's output should be? Is there any way to get the timestamps in monotonically increasing (i.e. display) order?
On the input side I am currently populating TimeStamp in the mfxBitstream structure with the presentation timestamp (in the order the bitstream is encoded). Is this the correct value to put in TimeStamp?
Should I also populate the DecodeTimeStamp field of the mfxBitstream structure? If yes, what should I populate it with?
Thanks Asik,
This problem was occurring for an I P B B bitstream pattern because in certain cases the QuickSync H.264 decoder may consume zero bytes. To handle that case we have to preserve the timestamp of that call until that frame is actually consumed.
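The timestamp-preservation fix described above can be sketched as a small helper. This is a hypothetical illustration, not the actual application code: a minimal stand-in struct is used in place of the real mfxBitstream from mfxstructures.h so the logic is self-contained, and the "bytes consumed" value stands for how much DecodeFrameAsync took from the bitstream.

```cpp
#include <cstdint>

// Minimal stand-in for mfxBitstream (the real type lives in
// mfxstructures.h); only the fields relevant to the fix are modeled.
struct Bitstream {
    uint64_t TimeStamp;    // presentation timestamp of this input
    uint32_t DataLength;   // bytes remaining to be consumed
};

// Preserves the timestamp of a submission until the decoder actually
// consumes bytes from it. When DecodeFrameAsync consumes zero bytes,
// the same input is resubmitted and must keep its original timestamp
// instead of being overwritten by the next input's timestamp.
class TimestampKeeper {
    bool     pendingValid_ = false;
    uint64_t pendingTs_    = 0;
public:
    // Returns the timestamp that should accompany this submission.
    uint64_t onSubmit(const Bitstream& bs) {
        if (!pendingValid_) {            // fresh input: remember its PTS
            pendingTs_    = bs.TimeStamp;
            pendingValid_ = true;
        }
        return pendingTs_;               // a retry reuses the preserved PTS
    }
    // Call after the decode call with the number of bytes consumed.
    void onResult(uint32_t consumed) {
        if (consumed > 0)                // frame accepted: release the hold
            pendingValid_ = false;
    }
};
```

Typical flow: submit, see zero bytes consumed, resubmit the same data with the same preserved timestamp, and only after a nonzero consumption does the next input's timestamp take over.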
Could you tell me what the performance of the QuickSync hardware decoder is for H.264 streams, and whether there is any frame caching inside the codec for the generated frames?
What is the function of AsyncDepth?
If anybody is aware of these problems or solutions, please reply as soon as possible.
I don't have a lot of info on hardware internals, but from what is available at https://01.org/linuxgraphics/documentation and my own experiments with simple I-frame only streams it doesn't look like there is caching. With simple decode pipelines with a sync immediately after decode you can assign a timestamp to the bitstream and expect it to be in the decoded frame. (However, the story may not be so simple with more complex asynchronous pipelines with multiple stages.)
BTW, the general purpose of the AsyncDepth setting is to indicate the number of frames which can be processed by that stage without requiring explicit synchronization. This allows opportunities for scheduling efficiency, especially in pipelines with multiple stages. For a simple plugin where you intend to process one frame at a time, AsyncDepth=1 is probably the right choice. However, separating decode and encode into separate filters will not be as efficient as combining the operations.
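For reference, AsyncDepth is just a field of mfxVideoParam set before initializing the decoder. The following configuration fragment (assuming mfxvideo.h and the usual SDK types) shows the single-frame-at-a-time choice described above:

```cpp
// Configuration fragment: one frame in flight per SyncOperation.
// Larger AsyncDepth values let the scheduler keep several frames in
// flight before a sync, at the cost of latency and extra surfaces.
mfxVideoParam par = {};
par.mfx.CodecId = MFX_CODEC_AVC;                  // H.264 decode
par.IOPattern   = MFX_IOPATTERN_OUT_VIDEO_MEMORY; // surfaces in video memory
par.AsyncDepth  = 1;                              // simple synchronous pipeline
// ... then MFXVideoDECODE_Init(session, &par);
```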
I am working on supporting field decoding in an Intel Media SDK QuickSync H.264 application. The problem is that I get data field by field on the input side, while QuickSync returns one frame's worth of data as output, so the timestamps tend to run ahead when QuickSync doesn't consume anything, since a field's worth of data needs to be fed again with its timestamp preserved. So, before MFXVideoDECODE_DecodeFrameAsync(), is there any way to know whether the input is interlaced or progressive, or any value that could distinguish a frame from a field?
This information would help me work out a solution; without it, generic cases may be solved but some rare cases may remain unresolved.
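One place this information is exposed, as far as I know, is the PicStruct field of mfxFrameInfo: MFXVideoDECODE_DecodeHeader can parse the sequence header before any DecodeFrameAsync call and report whether the stream is progressive or interlaced. This illustrative fragment assumes mfxvideo.h, an initialized session, and a bitstream already holding header data; note that streams which switch per-picture may report MFX_PICSTRUCT_UNKNOWN here, in which case the per-frame value has to be read from the output surface instead.

```cpp
mfxVideoParam par = {};
par.mfx.CodecId = MFX_CODEC_AVC;

// Parse the sequence header without decoding any frames.
mfxStatus sts = MFXVideoDECODE_DecodeHeader(session, &bitstream, &par);
if (sts == MFX_ERR_NONE) {
    switch (par.mfx.FrameInfo.PicStruct) {
    case MFX_PICSTRUCT_PROGRESSIVE:
        // Frame (progressive) input: one input PTS per output frame.
        break;
    case MFX_PICSTRUCT_FIELD_TFF:
    case MFX_PICSTRUCT_FIELD_BFF:
        // Interlaced input: each decoded surface is one frame built from
        // two fields, so two input timestamps map to one output frame.
        break;
    default:
        // MFX_PICSTRUCT_UNKNOWN: decide per picture by inspecting
        // pmfxOutSurface->Info.PicStruct after DecodeFrameAsync.
        break;
    }
}
```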
I also need software decoding support in the Intel Media SDK. Do we need a separate library for that?
If not, what do I have to do differently, compared to the hardware decoding path of QuickSync, to support software decoding as well?
Please reply if anybody has any idea.
The Windows versions of Media SDK include a software library; the Linux version does not. Where the software implementation exists you can choose it with MFX_IMPL_SOFTWARE at session initialization, and the interface is the same. For Linux you will need to set up a different code path, though you could implement this as your own Media SDK user plugin if a consistent interface is a priority.
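To illustrate the point about the identical interface, selecting the implementation is just a matter of the flag passed to MFXInit; everything downstream is unchanged. A minimal sketch (assuming mfxvideo.h on a platform where the software library exists):

```cpp
// Try the hardware implementation first, then fall back to the software
// library (available on Windows only). Note mfxVersion stores Minor
// before Major, so { {0, 1} } requests API 1.0 or later.
mfxVersion ver = { {0, 1} };
mfxSession session = nullptr;

mfxStatus sts = MFXInit(MFX_IMPL_HARDWARE_ANY, &ver, &session);
if (sts != MFX_ERR_NONE)
    sts = MFXInit(MFX_IMPL_SOFTWARE, &ver, &session);
// From here on, decode code is identical for either implementation.
```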