Media (Intel® Video Processing Library, Intel Media SDK)
Get community support for transcoding, decoding, and encoding in applications that use media tools such as the Intel® oneAPI Video Processing Library and the Intel® Media SDK.
Announcements
The Intel Media SDK project is no longer active. For continued support and access to new features, Intel Media SDK users are encouraged to read the transition guide on upgrading from Intel® Media SDK to Intel® Video Processing Library (VPL), and to move to VPL as soon as possible.
For more information, see the VPL website.

JoinSession with active tasks

BMart1
New Contributor II

Our app transcodes a never-ending playlist of videos into a single, seamless, real-time transport stream. We use separate sessions for encoding and decoding: the encoding session never ends, but we create a new session for each decoder. To optimize performance, we looked into joining the sessions, but JoinSession may fail with MFX_WRN_IN_EXECUTION: "Active tasks are executing or in queue in one of the sessions. Call this function again after all tasks are completed."
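
For illustration, here is roughly the call that fails. This is a minimal sketch against mfxvideo.h; JoinWhenIdle is our hypothetical helper, not an SDK function:

    #include <mfxvideo.h>
    #include <chrono>
    #include <thread>

    // Hypothetical helper: try to join a freshly created decode session into
    // the long-lived encode session, retrying while tasks are still in flight.
    mfxStatus JoinWhenIdle(mfxSession encodeSession, mfxSession decodeSession)
    {
        mfxStatus sts = MFXJoinSession(encodeSession, decodeSession);
        while (sts == MFX_WRN_IN_EXECUTION) {
            // Retrying only helps if both pipelines eventually drain, which
            // our never-ending encode session does not.
            std::this_thread::sleep_for(std::chrono::milliseconds(5));
            sts = MFXJoinSession(encodeSession, decodeSession);
        }
        return sts;
    }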

Should we create N decoding sessions upfront, at the same time as the encoder, and recycle them? Or empty the encoding pipeline first (yuck)?

Bruno

5 Replies
Surbhi_M_Intel
Employee

Hi Bruno, 

There is a sample (sample_multi_transcode) in the samples package that showcases a similar pipeline. It is configured to join multiple transcoding sessions using MFXJoinSession so that they execute in parallel, and it demonstrates two use cases: an N:N transcode pipeline and a 1:N transcode pipeline. Attached is the readme for the sample (for a quick read). You can download the samples from here.
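
Roughly, the joined-session setup in the sample looks like this (a simplified sketch using the C++ wrapper from mfxvideo++.h, not the exact sample code; error handling omitted):

    #include <mfxvideo++.h>

    int main()
    {
        mfxVersion ver = {{0, 1}};             // request API 1.0 or later
        MFXVideoSession parent;
        parent.Init(MFX_IMPL_AUTO_ANY, &ver);  // first transcoding session

        MFXVideoSession children[3];           // the other N-1 sessions
        for (auto &child : children) {
            child.Init(MFX_IMPL_AUTO_ANY, &ver);
            parent.JoinSession(child);         // join BEFORE any task is queued,
                                               // so MFX_WRN_IN_EXECUTION cannot occur
        }

        // ... run the joined transcode pipelines, one per session ...

        for (auto &child : children)
            child.DisjoinSession();            // detach before the sessions close
        return 0;
    }
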
Also check page 10 (topic: Multiple Sessions) in the Media SDK manual, which explains how to configure multiple sessions.

-Surbhi
BMart1
New Contributor II

Hi Surbhi,

Thanks for the links. I had replied to you already, but the post was lost because I wasn't logged in :(

I studied the source code for the multi-transcode sample. All sessions are created and joined before any of them runs. We can't do that, because we would need an infinite number of sessions.

Right now we have one decode session running; a few other decode sessions with some samples already decoded, ready to be consumed when the current decode session ends; and one encode session that consumes the samples from all decode sessions, one after another.

Thanks,
Bruno

Sravanthi_K_Intel (accepted solution)

Hi Bruno,

If I understand your pipeline correctly, here is what you are trying to achieve.

Either this is what you are looking for:

You will have multiple streams arriving on the fly that will be decoded, and each individual stream will be encoded after it's decoded. I assume you are working with H.264 encoding. Some notes:

- The Media SDK decoder is much faster than the encoder, so you can technically decode N streams in parallel while encoding M (where N >> M).

- If you launch one session per decoder, you will be limited by the rate of encoding. Not to mention that with so many decode sessions running in parallel, you will produce a lot of raw data that sits around until the encoder consumes it, and you will have to manage that memory. So no matter your N, your throughput will be M (at the cost of a lot of memory).

- If this is your pipeline, then you can spawn N decode sessions and recycle each one when the stream it is decoding completes. The sync calls and end-of-stream indicators tell you when one stream's decoding has ended, so you can enqueue another stream for decoding.

Or this is what you want:

You will have an N:N pipeline, where you spawn N sessions, each with a decode->encode pipeline. Each session transcodes one individual stream, until all your sessions are running at full steam. When one of the transcodes completes (again, detected via sync points and end-of-stream indicators), you can recycle that session and enqueue the next stream, as in the sketch below. This will give a throughput of N streams and sustain it. (Again, the throughput equals the encode capacity of the system, since decode speed >> encode speed.)
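
A rough sketch of the recycle step, assuming each session runs one decode->encode chain (the helper's shape and the handling of the next stream's parameters are illustrative, not a fixed recipe):

    #include <mfxvideo++.h>

    // When the current stream has fully drained, reuse the same session
    // (and, if joined, its existing join) for the next playlist entry.
    mfxStatus RecycleDecoder(MFXVideoSession &session,
                             MFXVideoDECODE  &decoder,
                             mfxSyncPoint     lastSyncPoint,
                             mfxVideoParam   &nextStreamParams)
    {
        // Drain: wait for the last queued task of the finished stream.
        mfxStatus sts = session.SyncOperation(lastSyncPoint, 60000 /* ms */);
        if (sts != MFX_ERR_NONE)
            return sts;

        // Recycle: close only the decoder, keep the session (and its join)
        // alive, and re-init the decoder for the next stream.
        decoder.Close();
        return decoder.Init(&nextStreamParams);
    }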

Hope this helps. If I missed your question, please let me know. A simple pipeline diagram could be extremely helpful.

BMart1
New Contributor II

Hi,

We'll have more encoders than decoders at a time, but more decoders than encoders over the lifetime of the app. For example, a 720x480 video may be decoded, scaled to 1920x1080 and 1280x720, and finally encoded at 10 Mbps and 5 Mbps respectively. When the 720x480 video ends, we'll pick the next video, this time 1920x1080, and continue to output 1920x1080 @ 10 Mbps and 1280x720 @ 5 Mbps. If you look at the program in the middle of the 720x480 video, the app will have one decoder, two VPP scalers, and two encoders. In the middle of the 1920x1080 input video, the app will have one decoder, a single scaler, and the same two encoders as before. Keeping the same encoder instances when we switch input videos is crucial for the continuity of the generated stream.
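
Schematically, that is:

    mid-720x480 input:    dec(720x480)   -> vpp 1920x1080 -> enc @ 10 Mbps
                                         -> vpp 1280x720  -> enc @ 5 Mbps

    mid-1920x1080 input:  dec(1920x1080) -> (no resize)   -> enc @ 10 Mbps
                                         -> vpp 1280x720  -> enc @ 5 Mbps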

If I understand correctly, you propose to create the two encoders, plus a couple of decoder and VPP sessions, upfront, and when an input video ends, reconfigure the existing sessions for the next input video. Right?

Bruno

Sravanthi_K_Intel

The way I look at your pipeline, you have an input stream that will be transcoded to multiple bitrates without resizing (no VPP stage) and, in parallel, transcoded to multiple bitrates with resizing (a VPP stage).

So, you can create an N:N multi_transcode scenario, where the N=4 sessions are two decode->encode chains at different bitrates and two decode->VPP->encode chains, roughly along the lines of the sketch below. When any session completes (signalling end of stream), you can enqueue a different stream on it and keep it going. (Which I believe is what you are getting at as well.)
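
For instance, the four chains could be built and joined along these lines (TranscodeChain is an illustrative container, not an SDK type; error handling omitted):

    #include <mfxvideo++.h>
    #include <memory>

    struct TranscodeChain {                      // illustrative, not SDK API
        MFXVideoSession session;
        std::unique_ptr<MFXVideoDECODE> dec;
        std::unique_ptr<MFXVideoVPP>    vpp;     // empty for the no-resize chains
        std::unique_ptr<MFXVideoENCODE> enc;
    };

    void BuildChains(TranscodeChain (&chain)[4])
    {
        mfxVersion ver = {{0, 1}};
        for (int i = 0; i < 4; ++i) {
            chain[i].session.Init(MFX_IMPL_AUTO_ANY, &ver);
            chain[i].dec.reset(new MFXVideoDECODE(chain[i].session));
            if (i >= 2)                          // the two resize pipelines
                chain[i].vpp.reset(new MFXVideoVPP(chain[i].session));
            chain[i].enc.reset(new MFXVideoENCODE(chain[i].session));
            if (i > 0)                           // join so all four share one scheduler
                chain[0].session.JoinSession(chain[i].session);
        }
    }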
