163 Views

DecodeFrameAsync returns MFX_ERR_UNSUPPORTED and MFX_ERR_UNDEFINED_BEHAVIOR

Hello,

I have used reference code from the sample_decode tutorial to decode multiple camera streams. I am decoding the streams in a multithreaded environment, using the hardware implementation with system memory.

Problem:

I am able to decode 9-10 Full HD channels; beyond that, DecodeFrameAsync randomly returns MFX_ERR_UNSUPPORTED and MFX_ERR_UNDEFINED_BEHAVIOR. It seems that because of this, the video quality of some streams degrades.

Why does DecodeFrameAsync return these error codes as the number of streams increases?

How should these errors be handled in code?

0 Kudos
12 Replies

This behavior could be caused by multiple things.  The cases I've seen most frequently are:

  1. Allocator code in Media SDK tutorials did not anticipate some cases that arise from multithreaded use (we are hoping to update soon).  This problem does not exist if you start from the sample code.
  2. If you are doing your own multithreaded implementation, the application needs to handle surface locking in addition to Media SDK's internal lock reporting.

A good starting point for a multithreaded implementation is sample_multi_transcode.  It could be more straightforward to modify this sample to remove the encode part of the pipeline than to build up from sample_decode to a multithreaded version. 

 


It would be nice if you could provide a sample_decode example that handles multiple files.


I have modified sample_decode using sample_multi_transcode as a reference. When I run my sample, hardware initialization and decoder Init work fine, but DecodeFrameAsync returns MFX_ERR_MORE_DATA, and after feeding the next frame it returns MFX_ERR_DEVICE_FAILED.

Please help me fix this. I'm attaching my code.


Definitely understand the need for more examples.  I am working on one very similar to what you've described, but I will be out next week at a conference.

For now, the main issue with your code seems to be managing the status codes output by DecodeFrameAsync.  Media SDK's decode, VPP, and encode functions work like state machines.  These can be tricky to get right, so it helps to start with a working example.

Here is what a "decode one frame" operation can look like, written in a format that is easier to understand (for me at least):

mfxStatus MediaSDKPipeline::DecodeOneFrame()
{
    mfxStatus sts = MFX_ERR_NONE;

    int nIndex=DecEncGetFreeSurfaceIndex(); 
    if (nIndex<0) throw runtime_error("Decode could not find free surface");

    bool stilldecoding=true;
    mfxFrameSurface1 *pmfxOutSurface;

    while (stilldecoding) { //keeps looping through multiple states until MFX_ERR_NONE indicates finish
       
        sts = mfxDEC->DecodeFrameAsync(&mfxBS, pSurfaces[nIndex], &pmfxOutSurface, &syncp);
        if (MFX_ERR_NONE == sts) {
            stilldecoding=false;
            continue;
        }
        if (MFX_WRN_DEVICE_BUSY == sts)
            usleep(1000);  // just wait and then repeat the same call to DecodeFrameAsync

        if (MFX_ERR_MORE_DATA == sts) {
            sts = ReadBitStreamData(&mfxBS, fSource);       // Read more data to input bit stream
            if (sts!=MFX_ERR_NONE) break;
        }

        if (MFX_ERR_MORE_SURFACE == sts || MFX_ERR_NONE == sts) {
            nIndex = DecEncGetFreeSurfaceIndex();       // Find free frame surface
            if (nIndex<0) throw runtime_error("Decode could not find free surface");
        }
    }

    if (sts<0) return sts;
    
    int surfnum=surfLookup.at(pmfxOutSurface);

    switch (pipelinedef)
    {
    case PIPELINE_DEC_VPPOUT:
        surfaceQvpp.push_back(surfnum);
        vpp_lock[surfnum]=true;
        break;
    case PIPELINE_DEC_VPPOUT_ENC:
        surfaceQvpp.push_back(surfnum);
        surfaceQenc.push_back(surfnum);
        vpp_lock[surfnum]=true;
        enc_lock[surfnum]=true;
        break;
    case PIPELINE_DEC_ENC:
        surfaceQenc.push_back(surfnum);
        enc_lock[surfnum]=true;
        break;
    default:
        throw runtime_error("Unsupported pipeline for decodeOneFrame");
    }
    return sts;
}

I've included some of the locking code to show how the application can manage keeping surfaces out of the free surface index search. 

Sorry I don't have the rest of the example ready to publish quite yet.  I understand you probably want some working code now, though.  Until the new example is published, the quickest path to simulating a threaded decode-only pipeline is probably sample_multi_transcode, as I'll describe below.

You can start (with slow output so you can check results) by constructing a par file like this:

-i::h264 in.h264  -e::nv12 -o::raw out1.nv12
-i::h264 in.h264  -e::nv12 -o::raw out2.nv12

and run like this

sample_multi_transcode -par test.par

 

If you want to turn off the slow output you can comment out the fwrite in sample_utils.cpp.  To start looking at how you can write your own, see the transcode function of sample_multi_transcode in pipeline_transcode.cpp.  This also has a DecodeOneFrame you can use for reference.

Hope this helps.


Hello guys,

As suggested, I have changed my code to follow sample_multi_transcode.

I have changed my initialization code, but after doing this I am facing the same error.

I'm attaching my hardware initialization and frame allocator code in a file.

Please look at it and help me figure out where I'm making a mistake.

Moderator

Hi Chintan,

Sorry for the late response.

Do you still have this issue?

If yes, then I am a little confused by your code, since it is missing a critical function call: SyncOperation(). I hope you have this call and simply did not include it in the code sample you attached.

Mark 


Hi Liu,

Yes, I did not attach the SyncOperation() code.

Can you please check the hardware and frame allocator code? Is everything OK in that?

Moderator

Ah, good reminder.

If you are using the allocator code from the tutorials, it may not have been updated for quite a while.

So could you check the allocator code from our sample code at https://github.com/Intel-Media-SDK/MediaSDK/tree/master/samples

The allocator code is in the "sample_common" directory.

Mark 


Hi Liu,

I used the allocator code from the "Samples R3 for Intel(R) Media Server Studio 2017 for Windows 8.0.24.271" release, and I also use a general_allocator object in my code.

Moderator

Hi Chintan,

Based on your description, it seems you were able to run 9-10 inputs for a while; how long did it run before failing?

Normally when we debug, we start from a simple but working scenario and gradually add more functionality:

1. Try the same inputs with sample_multi_transcode and see if it causes the same errors; this tells us whether it is a programming problem or an MSDK bug.

2. If the problem is in your program, simplify the code to check: for example, cut the input down to one stream to see if it still works, then increase to two inputs, and so on.

3. Answering the question above is also important; decoding only one frame or a few frames can tell us a lot about the issue.
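Point 1 above can be scripted along the lines of the par-file example earlier in the thread. This is a sketch: `in.h264` is a placeholder for one of your real streams, and `sample_multi_transcode` must be on your PATH to actually run the last step.

```shell
# Generate a decode-only par file with N identical lines (in.h264 is a
# placeholder input); raise N until the failures reproduce.
N=10
: > test.par
for i in $(seq 1 "$N"); do
  echo "-i::h264 in.h264 -e::nv12 -o::raw out$i.nv12" >> test.par
done
# Then run: sample_multi_transcode -par test.par
```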

Hope this helps.

Mark 

Beginner

Hello, I am hitting this error too. Here is my code:

mfxBitstream g_mfxBS;
std::auto_ptr<CSmplBitstreamReader>  m_FileReader;
#define g_size 8 * 1024 * 1024

int _tmain(int argc, _TCHAR* argv[])
{
    mfxIMPL impl = MFX_IMPL_AUTO;
    mfxVersion ver = { 0, 1};
    MFXVideoSession mfxSession;
    mfxStatus sts = mfxSession.Init(impl, &ver);
    MFXVideoDECODE mfxDEC(mfxSession);


    m_FileReader.reset(new CH264FrameReader());
    sts = m_FileReader->Init(L"input.h264");


    mfxVideoParam mfxVideoParams;
    memset(&mfxVideoParams, 0, sizeof(mfxVideoParams));
    mfxVideoParams.mfx.CodecId = MFX_CODEC_AVC;
    mfxVideoParams.IOPattern = MFX_IOPATTERN_OUT_SYSTEM_MEMORY;
    //mfxVideoParams.AsyncDepth = 4;
    //mfxVideoParams.mfx.Rotation = MFX_ROTATION_0;
    //mfxVideoParams.mfx.FrameInfo.FourCC = MFX_FOURCC_NV12;
    //mfxVideoParams.mfx.NumThread = 1;


    g_mfxBS.Data = new mfxU8[g_size];
    g_mfxBS.MaxLength = g_size;


    mfxBitstream*       pBitstream = &g_mfxBS;
    do
    {
        m_FileReader->ReadNextFrame(pBitstream);
        sts = mfxDEC.DecodeHeader(pBitstream, &mfxVideoParams);

    } while (sts == MFX_ERR_MORE_DATA);


    mfxFrameAllocRequest Request;
    memset(&Request, 0, sizeof(Request));
    sts = mfxDEC.QueryIOSurf(&mfxVideoParams, &Request);

    //mfxFrameAllocResponse mfxResponse;

    mfxU16 numSurfaces = Request.NumFrameSuggested;

    // Initialize the Media SDK decoder
    sts = mfxDEC.Init(&mfxVideoParams);
    

    // Allocate surface headers (mfxFrameSurface1) for decoder
    mfxFrameSurface1** pmfxSurfaces = new mfxFrameSurface1*[numSurfaces];
    for (int i = 0; i < numSurfaces; i++)
    {
        pmfxSurfaces[i] = new mfxFrameSurface1;
        memset(pmfxSurfaces[i], 0, sizeof(mfxFrameSurface1));
        memcpy(&(pmfxSurfaces[i]->Info), &(mfxVideoParams.mfx.FrameInfo), sizeof(mfxFrameInfo));
        //pmfxSurfaces[i]->Data.MemId = mfxResponse.mids[i]; // MID (memory id) represents one D3D NV12 surface
    }

    int nIndex = GetFreeSurfaceIndex(pmfxSurfaces, numSurfaces);


    mfxFrameSurface1* pmfxOutSurface = NULL;
    mfxSyncPoint  syncp;
    m_FileReader->ReadNextFrame(pBitstream);
    sts = mfxDEC.DecodeFrameAsync(pBitstream, pmfxSurfaces[nIndex], &pmfxOutSurface, &syncp);

 

 

Why does

sts = mfxDEC.DecodeFrameAsync(pBitstream, pmfxSurfaces[nIndex], &pmfxOutSurface, &syncp);

return MFX_ERR_UNDEFINED_BEHAVIOR?

Moderator

Hi Shaw,

Could you write a new post for your problem?

It is hard to manage the thread when new questions are appended to someone else's topic. If you want to avoid retyping the description, you can link back to this post.

Mark
