Media (Intel® Video Processing Library, Intel Media SDK)
Access community support for transcoding, decoding, and encoding in applications using media tools such as the Intel® oneAPI Video Processing Library and Intel® Media SDK.
Announcements
The Intel Media SDK project is no longer active. For continued support and access to new features, Intel Media SDK users are encouraged to read the transition guide on upgrading from Intel® Media SDK to Intel® Video Processing Library (VPL), and to move to VPL as soon as possible.
For more information, see the VPL website.

Recommended approach for decoding to two different frame sizes

James_B_9
Beginner

Hi,

Is there a recommended way to decode an H.264 bitstream to two different output frame sizes, without running two separate decoding sessions?

For instance, is it possible, and is it recommended, to initialize two MFXVideoVPP objects with different output frame dimensions? If so, would it make sense to pass each decoded frame through both VPPs to get two different frame sizes?
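I imagine the two VPP configurations would differ only in the output size, something along these lines (just a sketch: most parameters are omitted, decodedFrameInfo stands for the mfxFrameInfo of the decoder output, and the 1280x720 / 640x480 sizes are only placeholders):

// Sketch only: two VPP configurations differing just in output size.
mfxVideoParam vppPar1 = {};
vppPar1.IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY | MFX_IOPATTERN_OUT_SYSTEM_MEMORY;  // system memory assumed for the sketch
vppPar1.vpp.In  = decodedFrameInfo;               // mfxFrameInfo taken from the decoder output
vppPar1.vpp.Out = vppPar1.vpp.In;
vppPar1.vpp.Out.Width  = vppPar1.vpp.Out.CropW = 1280;   // first output size (placeholder)
vppPar1.vpp.Out.Height = vppPar1.vpp.Out.CropH = 720;

mfxVideoParam vppPar2 = vppPar1;                  // second configuration, different size
vppPar2.vpp.Out.Width  = vppPar2.vpp.Out.CropW = 640;    // second output size (placeholder)
vppPar2.vpp.Out.Height = vppPar2.vpp.Out.CropH = 480;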

Regards,

James.

Jeffrey_M_Intel1
Employee

Are you looking for a pipeline like this?

decode -> resize -> encode
       -> resize -> encode

You can try this with the sink/source par file syntax in sample_multi_transcode.

If you look through the code, you'll see that it sets up multiple sessions with a shared queue of surfaces. The side that writes to the queue calls it the sink, and the side that reads from it calls it the source.
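For example, a par file along these lines (roughly; exact option names can differ between sample versions) decodes the stream once and produces two differently sized outputs:

-i::h264 input.h264 -o::sink
-i::source -w 1280 -h 720 -o::h264 out_720p.h264
-i::source -w 640 -h 480 -o::h264 out_480p.h264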

Your implementation wouldn't need to be as complex, but it would need to share surfaces from the single decode with at least one additional session to do a second resize.  Your application would also need to add an additional layer of reference counting to keep the decoded surface out of the pool until both sessions were finished with it.
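One way to do that bookkeeping (just a sketch; none of these names come from the SDK, it is purely application-side code):

#include <map>
#include "mfxvideo.h"

// Application-side bookkeeping, not SDK API: count how many downstream
// VPP sessions still need each decoded surface.
static std::map<mfxFrameSurface1*, int> g_pendingUses;

// Call after DecodeFrameAsync returns a surface that will feed the resizes.
void MarkSurfaceInUse(mfxFrameSurface1* pDecoded, int numConsumers)
{
    g_pendingUses[pDecoded] = numConsumers;   // e.g. 2 for two VPP sessions
}

// Call after SyncOperation completes for a VPP output that read from pDecoded.
// Only when this returns true (and the surface's Data.Locked count is zero)
// should the decoder be allowed to reuse the surface from its pool.
bool ReleaseSurfaceUse(mfxFrameSurface1* pDecoded)
{
    return --g_pendingUses[pDecoded] == 0;
}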

Regards, Jeff

 

James_B_9
Beginner

Hi Jeff, thank you, that is exactly what I was looking for, minus the encoding stage (I need the raw frames).

I have applied your approach, but I am still unsure whether I need to join the sessions together. The manual (API 1.7, page 10) says:

Independently operated SDK sessions cannot share data unless the application explicitly synchronizes session operations (to ensure that data is valid and complete before passing from the source to the destination session.)

Your approach works regardless of whether the sessions are joined or not, so what data is the manual referring to?

My simple approach is to decode the frame, then call RunFrameVPPAsync twice with the decode output as input to both. I then call SyncOperation twice, once on each session. Is this necessary, or can I just call it on the last session on which I call RunFrameVPPAsync? That is, given the order of operations below, can I remove the first call to SyncOperation?

// Decode one frame
sts = m_mfxDEC->DecodeFrameAsync(&mfxBS, m_pDecodeSurfaces[nIndex], &pmfxOutSurface, &syncpD);

// Feed the decoded surface to both VPPs (one per session) to produce the two sizes
sts = m_mfxVPP->RunFrameVPPAsync(pmfxOutSurface, m_pTasks[m_nTaskIdx].m_pVPPOutSurfaces[nIndexVPP],
                                 NULL, &(m_pTasks[m_nTaskIdx].syncVPP));
sts = m_mfxVPP1->RunFrameVPPAsync(pmfxOutSurface, m_pTasks1[m_nTaskIdx].m_pVPPOutSurfaces[nIndexVPP1],
                                  NULL, &(m_pTasks1[m_nTaskIdx].syncVPP));

// Synchronize each session's VPP output
sts = m_pSession.SyncOperation(m_pTasks[m_nFirstSyncTask].syncVPP, 60000);
sts = m_pSession1.SyncOperation(m_pTasks1[m_nFirstSyncTask].syncVPP, 60000);

Regards,

James.

Jeffrey_M_Intel1
Employee

Sorry for the delayed reply.

Joins are less necessary for hardware pipelines than for software ones. For multiple software pipelines, joining sessions can avoid performance issues caused by too many threads. For hardware pipelines, all instructions go through one queue managed by one thread, so joining has little effect on performance. Joined sessions may save a bit of memory, though.
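If you do decide to join them, with the session wrappers from your code it would look roughly like this (sketch only; here m_pSession is treated as the parent and m_pSession1 as the child):

// Both sessions are already initialized with InitEx at this point.
sts = m_pSession.JoinSession(m_pSession1);   // join the VPP-only session to the decode session

// ... run the pipeline ...

sts = m_pSession1.DisjoinSession();          // disjoin the child before closing the sessions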

For synchronization, you only need to synchronize at the last stage before output -- but multiple outputs means multiple syncs.

 

James_B_9
Beginner

Hi Jeffrey, thank you again for your reply. I am still having problems using multiple sessions; can you tell me the correct procedure for initializing and destroying them? My code is throwing an exception on certain hardware and driver versions when the second session (the one used just for VPP resizing) is destroyed, specifically at mfxRes = MFXClose(m_session) in the function shown below from mfxvideo++.h.

    virtual mfxStatus Close(void)
    {
        mfxStatus mfxRes;
        mfxRes = MFXClose(m_session); m_session = (mfxSession) 0;
        return mfxRes;
    }

 

When I have a single session, I have no problems on any hardware or driver version I have tested.

 

For simplicity I am using system memory surfaces. I initialize both sessions with

m_session.InitEx(par);
m_session1.InitEx(par1);

create the decoder and VPP objects with

m_mfxDEC = new MFXVideoDECODE(m_session);
m_mfxVPP = new MFXVideoVPP(m_session);
m_mfxVPP1 = new MFXVideoVPP(m_session1);

and destroy the decoder and VPP objects with

m_mfxDEC->Close();
delete m_mfxDEC;
m_mfxVPP->Close();
delete m_mfxVPP;
m_mfxVPP1->Close();
delete m_mfxVPP1;

The decoding appears to work perfectly. However, on my HD 5500 (5200U), which I initially used for testing, no exceptions were thrown with driver version 4112 (or the latest driver, 4332), but upgrading to 4279 and/or 4300 caused an exception to be thrown on session destruction, when mfxRes = MFXClose(m_session) is called following the destruction of the decoder and VPP objects. This nearly always happens the first time I destroy the decoder and VPP objects, though sometimes it only happens after a few initializations and destructions.

On my HD 530 (6700HQ), on all driver versions I have tried, including the latest beta 4404, an exception is thrown the first time the decoder and VPP objects are destroyed. Placing a sleep before destroying the decoder and VPP objects causes the exception to be thrown only after several initializations and destructions.

Thank you,

James.
