In our application we need to composite multiple video streams and render the result to the display.
Our composition pipeline will look like this:
Decoder 1 --|
Decoder 2 --|------> VPP Compositor --> Renderer
Note: the input streams can have different frame rates (e.g. stream 1 at 25 FPS and stream 2 at 30 FPS). In this case I expect the compositor to compose all decoded frames at a constant rate equal to the highest frame rate among the decoded streams.
All media sessions run in parallel threads.
Now there will be situations where the VPP compositor requests a decoded frame from each decoder session, and the decoder running at the lower FPS cannot provide a decoded surface at that moment. I want the compositor not to wait for the slowest decoder, but to keep composing at a constant rate.
To achieve this, I need to provide a dummy (i.e. transparent) frame to the VPP compositor in place of the slowest decoder's surface.
Question: how can I generate a transparent frame that can be passed to the VPP compositor?
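One possible approach (a sketch, not Media SDK API code): fill a spare surface with "video black" (Y=16, U=V=128 for NV12) and hand that to the compositor in place of the missing frame. If the stream should be fully invisible rather than black, the compositor's per-stream global alpha (in the real SDK, `GlobalAlpha`/`GlobalAlphaEnable` in `mfxVPPCompInputStream`) can be set to 0 for that input. The `Nv12Surface` type below is a simplified stand-in for `mfxFrameSurface1`, used only to illustrate the plane fill:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified stand-in for an NV12 frame surface (the real application
// would write into the Data.Y / Data.UV pointers of mfxFrameSurface1).
struct Nv12Surface {
    int width, height, pitch;
    std::vector<uint8_t> y;   // luma plane: height * pitch bytes
    std::vector<uint8_t> uv;  // interleaved chroma: (height / 2) * pitch bytes
};

// Fill the surface with video black: Y = 16, U = V = 128.
// Combined with a global alpha of 0 for this input stream, the
// compositor effectively treats the frame as transparent.
void fillBlankFrame(Nv12Surface& s) {
    std::memset(s.y.data(), 16, s.y.size());
    std::memset(s.uv.data(), 128, s.uv.size());
}
```

Another common fallback is to simply re-submit the last successfully decoded surface for the slow stream, which avoids visible flicker at the cost of one frame of latency.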
Could you please provide a reproducer of the composite pipeline you have built so we can reproduce this scenario on our end? Also, please let us know which operating system you are using.
I am using an Intel Atom 3940 running Yocto Embedded Linux.
Should I provide you with my sample application or an architecture overview?
Thanks for the prompt response.
I have attached my sample application for your reference.
- We based our sample application on the sample_multi_transcode sample provided with the Intel Media SDK.
- The sample application decodes two H.264 streams (1. 704x576 at 25 FPS and 2. 2592x1952 at 30 FPS). The decoded streams are composed in a 2x2 window layout.
- There are 3 Media SDK sessions: the 1st for VPP composition, the 2nd and 3rd for decoding. The VPP session is the parent session, and both decoder sessions are joined to it.
- All Media SDK sessions run in independent threads.
- To share surfaces between the decoder and composition sessions, a linked list of surface buffers is maintained. Each decoder decodes one frame, puts it into its surface buffer, and signals the compositor session asynchronously to read the frame.
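The hand-off described above can be sketched as a small thread-safe queue with a non-blocking pop (names are illustrative, not from the sample application). Because `try_pop()` never blocks, the compositor can run at its fixed rate and fall back to a blank frame, or re-use the previous frame, whenever the slower decoder has nothing ready:

```cpp
#include <deque>
#include <mutex>
#include <optional>
#include <utility>

// Sketch of the shared surface buffer between one decoder thread and the
// compositor thread. The decoder pushes finished frames; the compositor
// polls with try_pop() each composition tick and never waits.
template <typename Surface>
class SurfaceQueue {
public:
    void push(Surface s) {
        std::lock_guard<std::mutex> lk(m_);
        q_.push_back(std::move(s));
    }

    // Returns the oldest queued surface, or std::nullopt if the decoder
    // has not produced a new frame yet.
    std::optional<Surface> try_pop() {
        std::lock_guard<std::mutex> lk(m_);
        if (q_.empty()) return std::nullopt;
        Surface s = std::move(q_.front());
        q_.pop_front();
        return s;
    }

private:
    std::mutex m_;
    std::deque<Surface> q_;
};
```

In the real application, `Surface` would be a pointer to a locked `mfxFrameSurface1`, and the compositor would release it back to the surface pool after `RunFrameVPPAsync` completes.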
Generating a dummy/transparent frame for the compositor would be a complicated approach. The recommended solution in this case is to use the frame rate conversion (FRC) option in the Media SDK to convert all streams feeding the compositor to the same FPS. Please find more information about frame rate conversion at the link below.
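As a rough sketch of that recommendation (a configuration fragment only; it assumes an initialized Media SDK session and the usual `mfxvideo.h` types), FRC is enabled by giving VPP different input and output frame rates, optionally with an explicit algorithm via the `mfxExtVPPFrameRateConversion` extended buffer:

```cpp
// Convert the 25 FPS stream to the common 30 FPS composition rate.
mfxVideoParam vppParams = {};
vppParams.vpp.In.FrameRateExtN  = 25;  // input: 25/1 FPS
vppParams.vpp.In.FrameRateExtD  = 1;
vppParams.vpp.Out.FrameRateExtN = 30;  // output: 30/1 FPS
vppParams.vpp.Out.FrameRateExtD = 1;

// Optional: select the FRC algorithm explicitly.
mfxExtVPPFrameRateConversion frc = {};
frc.Header.BufferId = MFX_EXTBUFF_VPP_FRAME_RATE_CONVERSION;
frc.Header.BufferSz = sizeof(frc);
frc.Algorithm = MFX_FRCALGM_PRESERVE_TIMESTAMP;

mfxExtBuffer* extBuffers[] = { &frc.Header };
vppParams.ExtParam    = extBuffers;
vppParams.NumExtParam = 1;
```

With all inputs converted to the same rate, the compositor receives one frame per stream per tick and the dummy-frame workaround is no longer needed.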