I'm working on a project that needs to do 1:N transcoding for an HLS ABR stream, transcoding a 1080i source stream into three output streams (1280p + 720p + 480p). The pipeline would be:
input_bitstream --> decode --> 3 x (vpp --> encode) --> three output bitstreams
I'm trying to use the opaque surface system, but in my tests it did not work: the opaque surfaces could not simply be shared with another pipeline the way system memory can be.
How can I share the output surface of DecodeFrameAsync between three vpp+encode pipelines? Is there any code sample I can use for reference?
Can you please let us know what hardware you are planning to deploy this pipeline on? In general, you should be able to keep the entire pipeline sharing video memory instead of opaque memory. The pipeline you have indicated above can be set up using sample_multi_transcode, where you can configure a 1:N or N:N transcode pipeline. sample_multi_transcode is part of the sample package, which can be downloaded from here.
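For example, a 1:N parameter file for sample_multi_transcode looks roughly like the following (file names and resolutions are placeholders; please check the readme distributed with the samples for the exact options your version supports):

```
-i::h264 input.h264 -o::sink -join -hw
-i::source -w 1280 -h 720 -o::h264 out_720p.h264 -join -hw
-i::source -w 854 -h 480 -o::h264 out_480p.h264 -join -hw
```

Run it as `sample_multi_transcode -par ladder.par`: the first line decodes once into a shared sink, and each `-i::source` line encodes one rendition from the shared decoded surfaces, with all sessions joined.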
Sorry for the delay.
I'm planning to deploy this pipeline using MSS version 2015R6 on CentOS 7.1 with an Intel(R) Core(TM) i7-5650U CPU.
I had worked out a simple pipeline that integrates with ffmpeg for the demuxing/muxing job and then decodes/VPPs/encodes the input frames using the Media SDK. It works well. I used mediasdk-tutorials-0.0.3 as a reference for the simple transcode pipeline, with the opaque-memory IOPattern.
But it failed when I created a second session for the 1:N pipeline:
session_1: input_bitstream --> decode --> vpp_enc --> output_1_bitstream
session_2: output frameSurface of session_1_decode --> vpp_enc --> output_2_bitstream.
I looked into sample_multi_transcode, but found it so complex that I can't figure out how to build this pipeline from it.
We have an internal application which is a simplified version of the 1:N pipeline; let me check if we can share it with you. In the meantime, can you provide the exact failure you see, along with a reproducer? We can try to help you fix some of the bottlenecks after looking into your application.
Thanks for your reply.
I was just running a test: I created two joined sessions, session 1 for decode-vpp-encode and session 2 for vpp-encode. I pass the mfxFrameSurface1 returned by DecodeFrameAsync through RunFrameVPPAsync (session 1) -> EncodeFrameAsync (session 1) -> SyncOperation (session 1), then pass the same mfxFrameSurface1 through RunFrameVPPAsync (session 2) -> EncodeFrameAsync (session 2) -> SyncOperation (session 2). I'm not considering transcode performance yet. The problem is that the output of session 2 is blank.
I'm working with sample_multi_transcode now; hopefully I can figure it out.
I would appreciate it if a simplified version of the 1:N pipeline code sample could be shared.