Media (Intel® Video Processing Library, Intel Media SDK)
Access community support for transcoding, decoding, and encoding in applications that use media tools such as the Intel® oneAPI Video Processing Library and the Intel® Media SDK
Announcements
The Intel Media SDK project is no longer active. For continued support and access to new features, Intel Media SDK users are encouraged to read the transition guide on upgrading from Intel® Media SDK to Intel® Video Processing Library (VPL), and to move to VPL as soon as possible.
For more information, see the VPL website.

Streaming with Media SDK

ujarijam
Beginner
1,509 Views
I am using the Intel Media SDK to transcode an RTP stream. The source delivers MPEG-2 video in TS/RTP packets, which I read with the FFmpeg av_read_frame(pFormatCtx, &packet) API. At the moment I read only the video frames and their corresponding timestamps. The video data is MPEG-2, and I want to convert it to H.264 using the Media SDK transcode API. After transcoding, the data is packed into TS and RTP packets again and re-streamed to the destination. The idea is simply to convert to H.264 using the Media SDK.

The problem is that the Media SDK does not seem to transcode on a frame-by-frame basis.

When I give it input (MPEG-2 video data of 97501 bytes with timestamp ts1), it asks for more data and m_Bitstream.DataOffset is set to about 97K. I then give it more data (6794 bytes with timestamp ts2) obtained by calling av_read_frame again, after resetting m_Bitstream.DataOffset to 0. The transcode API then gives transcoded output. Whenever the transcoder asks for more data, I feed it the next chunk of input data and its timestamp.

The problem I have found is that the timestamps at the output do not come out correctly. As a result, I am not able to stream properly. If the Media SDK could transcode frame by frame, I believe the timestamps would not be a problem.

Can anyone help me out?
0 Kudos
19 Replies
IDZ_A_Intel
Employee
1,509 Views
Hi ujarijam,

Media SDK does work on a frame-by-frame basis. However, since the recommended behavior for best performance of the Media SDK encoder (and decoder) is to use it asynchronously, it is common for the call to EncodeFrameAsync to return MFX_ERR_MORE_DATA (-10) so that additional frame encode requests can be queued up. This approach gives the best performance and is also showcased in the Media SDK samples.

If you require EncodeFrameAsync to always return a complete frame, please use it in a synchronous fashion by calling SyncOperation directly after the call.

Note that if you want to use Media SDK for streaming or video conferencing purposes, you should use the most recent beta release of Media SDK 3.0. Media SDK 2.0 is not suitable for that kind of workload due to high latency (among other things). For information on how to configure Media SDK for low latency, please see the following forum item:
http://software.intel.com/en-us/forums/showthread.php?t=86910&o=a&s=lr
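As a rough sketch only (the specific values below are assumptions drawn from settings that appear later in this thread, not a quote from the linked post), a low-latency leaning encoder configuration typically keeps the pipeline shallow:

#include "mfxvideo.h"

// Sketch: field names are from the Media SDK API, the particular values are assumptions.
static void ConfigureLowLatencyEncode(mfxVideoParam &encParams)
{
    encParams.mfx.CodecId             = MFX_CODEC_AVC;
    encParams.mfx.GopRefDist          = 1;    // no B-frames -> no reordering delay
    encParams.mfx.NumRefFrame         = 1;    // single reference frame
    encParams.AsyncDepth              = 1;    // keep only one frame in flight
    encParams.mfx.RateControlMethod   = MFX_RATECONTROL_CBR;
    encParams.mfx.TargetKbps          = 1500; // example bitrate
    encParams.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_PROGRESSIVE;
}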

Regarding timestamps: if you use Media SDK encode in an asynchronous fashion, please make sure to write the timestamp to the surface (Data.TimeStamp) before calling EncodeFrameAsync. When the frame has been fully encoded, the timestamp can be found in the bitstream after calling SyncOperation. If you are using the synchronous approach you can keep track of the timestamps yourself (no need to write them to the surface).

The same applies to the transcode scenario you describe (decoding an MPEG-2 bitstream). For the asynchronous approach, set the timestamp in the bitstream structure before calling DecodeFrameAsync. For the synchronous approach, timestamps can be handled manually as in the encode case.
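For illustration, a minimal sketch of the asynchronous encode path with timestamp propagation (the session, encoder, surface and bitstream setup are assumed to exist elsewhere, as in the samples; the helper name is made up):

#include "mfxvideo++.h"

// Sketch: encode one surface and carry its timestamp through to the output bitstream.
mfxStatus EncodeWithTimestamp(MFXVideoSession &session, MFXVideoENCODE &encoder,
                              mfxFrameSurface1 *surface, mfxBitstream &bs, mfxU64 pts)
{
    surface->Data.TimeStamp = pts;               // timestamp goes in on the surface
    mfxSyncPoint syncp = NULL;
    mfxStatus sts = encoder.EncodeFrameAsync(NULL, surface, &bs, &syncp);
    if (sts == MFX_ERR_NONE && syncp)
    {
        sts = session.SyncOperation(syncp, 60000 /* ms */);
        // after sync, bs.TimeStamp holds the timestamp of the encoded frame
    }
    return sts;                                  // MFX_ERR_MORE_DATA means the frame was only queued
}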

Regards,
Petter
0 Kudos
ujarijam
Beginner
1,509 Views
Thanks for the prompt reply.
Let me explain to you clearly what I am doing.
I am using Media SDK version 3.0.442.32245. I am using sample_multi_transcode for my transcoding (from MPEG-2 video to H.264 video) and streaming, and I call the Transcode() function to transcode.
I set the following:
1. m_mfxDecParams.mfx.DecodedOrder = 0; in
mfxStatus CTranscodingPipeline::InitDecMfxParams(sInputParams *pInParams, mfxBitstream* m_pmfxBS1)
2. m_mfxEncParams.mfx.GopRefDist = 1; (for no B-frames) in
mfxStatus CTranscodingPipeline::InitEncMfxParams(sInputParams *pInParams)
My input timestamps are below. These timestamps are supplied through the m_pmfxBS structure:
PTSin= 60508
PTSin= 64809
PTSin= 66969
PTSin= 71289
PTSin= 75609
PTSin= 79929
PTSin= 82089
PTSin= 86396
PTSin= 90716
PTSin= 92876
PTSin= 97196
PTSin= 99356
PTSin= 103676
PTSin= 107996
My output timestamps are below; they are obtained from pBitstreamEx->Bitstream.TimeStamp:
PTSout= 60508
PTSout= 64809
PTSout= 66969
PTSout= 60508
PTSout= 75609
PTSout= 79929
PTSout= 71289
PTSout= 86396
PTSout= 90716
PTSout= 82089
PTSout= 97196
PTSout= 99356
PTSout= 92876
PTSout= 107996
If you look at the output timestamps, after every couple of values a smaller (earlier) timestamp appears. Why do the timestamps come out in a different order? Because of this, streaming does not work correctly.
Can you please help me?
0 Kudos
IDZ_A_Intel
Employee
1,509 Views
Hi ujarijam,

The transcode sample was written to illustrate a transcode pipeline with optimal throughput. To achieve this, the decode, VPP, and encode operations in the pipeline are made as asynchronous as possible, which also leads to out-of-order delivery of encoded frames in the bit stream.

In streaming use cases, throughput is likely not the most central requirement; latency and robustness matter more. If you require the encoder to deliver frames in an ordered manner, you have to make some changes to the sample: either introduce additional sync points, or reorder the bit stream chunks delivered after SyncOperation is called (a rough sketch of the latter follows below).
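As a hedged illustration of the reordering option (this helper is not part of the sample; the class name and the flush threshold are assumptions): buffer each synced bitstream chunk keyed by its timestamp and emit chunks in increasing timestamp order.

#include <map>
#include <vector>
#include "mfxstructures.h"

// Sketch: collect out-of-order encoded chunks and release them sorted by TimeStamp.
class BitstreamReorderer
{
public:
    void Push(const mfxBitstream &bs)
    {
        std::vector<mfxU8> chunk(bs.Data + bs.DataOffset,
                                 bs.Data + bs.DataOffset + bs.DataLength);
        m_pending[bs.TimeStamp] = chunk;
    }

    // Pop the oldest chunk once enough frames are buffered to cover the reordering depth.
    bool Pop(std::vector<mfxU8> &out, mfxU64 &pts, size_t minDepth = 4)
    {
        if (m_pending.size() < minDepth)
            return false;
        std::map<mfxU64, std::vector<mfxU8> >::iterator it = m_pending.begin();
        pts = it->first;
        out = it->second;
        m_pending.erase(it);
        return true;
    }

private:
    std::map<mfxU64, std::vector<mfxU8> > m_pending; // ordered by timestamp
};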

Regards,
Petter
0 Kudos
ujarijam
Beginner
1,509 Views
Thanks once again for the prompt answer.
What changes do I need to make to get the timestamps in increasing order after calling the encoder? Can you please give me an example?
Thanking you in advance
0 Kudos
IDZ_A_Intel
Employee
1,509 Views
Hi,
Since we do not have sample code available for this scenario, I unfortunately do not have any code I can clip out and send to you. Could you please explore the two suggestions I gave in my previous post? If you are still not able to make progress, let us know and we may be able to find some time to create some code examples.
Regards,
Petter
0 Kudos
ujarijam
Beginner
1,509 Views
Hi,
Thanks for the answer.
I have some more questions about your answers above.
1. you said that "...Media SDK does work on a frame by frame basis....."
What changes must be made to the present code to make it work frame by frame, given that the Media SDK transcode function processes multiple frames? (Note that in transcode, decode is called first and encode later, if I do not want to use the VPP function.) I have observed that even when I give input data as large as 300K bytes, the decoder still asks for more data.
2. You also said that if I require the encoder to deliver frames in order, I have to make changes to the sample and introduce either additional sync points or reordering of the output.
How and where should these sync points be added between the decoder and the encoder?
3. If you look at my second question above, with the lists of input and output timestamps, you can see that in the output list the first and fourth values are the same. That is, only the first timestamp is duplicated, and only once. Can you comment on this?
Thanking you in advance.
0 Kudos
IDZ_A_Intel
Employee
1,509 Views
Hi ujarijam,

1. What I meant by "Media SDK works on a frame basis" was really that the API does not expose the ability to handle decode or encode at the slice level.
sample_multi_transcode is written to showcase good performance, which is achieved by processing frames in an asynchronous manner. As a result, frames will likely be delivered out of order, as soon as they have been processed.

Both the encoder and decoder sample implementations also work in this way. For instance, during decode, DecodeFrameAsync is called repeatedly (this is the reason you are asked to input more data) until all buffers have been allocated. At that point we are forced to sync to retrieve the decoded frame.

To ensure frames are delivered in order, and if your application does not depend on optimal throughput, you can certainly use the API in a synchronous manner. For instance:
DecodeFrameAsync()
SyncOperation()
EncodeFrameAsync()
SyncOperation()

Note that none of the available samples are written in this way. That said, you should be able to reuse the common code parts.
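Purely as an illustration of that sequence (this is not taken from any sample; surface management, error handling and VPP are omitted, and the helper name is an assumption):

#include "mfxvideo++.h"

// Sketch: fully synchronous decode -> encode of one frame.
// 'session', 'decoder', 'encoder', 'inBS', 'outBS' and 'workSurface'
// are assumed to be set up as in the samples.
mfxStatus TranscodeOneFrameSync(MFXVideoSession &session,
                                MFXVideoDECODE &decoder, MFXVideoENCODE &encoder,
                                mfxBitstream &inBS, mfxBitstream &outBS,
                                mfxFrameSurface1 *workSurface)
{
    mfxFrameSurface1 *decodedSurface = NULL;
    mfxSyncPoint decSyncp = NULL, encSyncp = NULL;

    mfxStatus sts = decoder.DecodeFrameAsync(&inBS, workSurface, &decodedSurface, &decSyncp);
    if (sts != MFX_ERR_NONE)
        return sts;                               // e.g. MFX_ERR_MORE_DATA: feed more input first

    sts = session.SyncOperation(decSyncp, 60000); // wait for the decoded frame
    if (sts != MFX_ERR_NONE)
        return sts;

    sts = encoder.EncodeFrameAsync(NULL, decodedSurface, &outBS, &encSyncp);
    if (sts != MFX_ERR_NONE)
        return sts;                               // MFX_ERR_MORE_DATA here means nothing to output yet

    return session.SyncOperation(encSyncp, 60000); // outBS now holds the encoded frame and its TimeStamp
}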

2. The SyncOperation call is the call I am referring to. When you call SyncOperation, Media SDK waits until the scheduled operation is ready.

3. I cannot reproduce duplicate timestamps here. Can you please check your implementation, or the place where you print the timestamp data, to make sure it is correct?

Regards,
Petter
0 Kudos
ujarijam
Beginner
1,509 Views
Hi Petter,
Thanks for the answers.
1. I am in the process of changing the current code so that it gives output frame by frame, as per your answers.
2. You asked "...Can you please check your implementation or the place where you print the timestamp data to make sure it is correct?"
I use pBitstreamEx->Bitstream.TimeStamp to print it, in the mfxStatus CTranscodingPipeline::PutBS(mfxBitstream *stOutPutBitStream) function in pipeline_transcode.cpp.
3. I have one more question. Does the Media SDK H.264 encoder give multiple SPS and PPS headers? If so, what settings are required? Multiple SPS/PPS may be required for streaming.
0 Kudos
ujarijam
Beginner
1,509 Views
Hi Petter,
I am reproducing the DecodeOneFrame function here; a SyncOperation call is added so that it is executed immediately whenever the SDK asks for more data:

mfxStatus CTranscodingPipeline::DecodeOneFrame(ExtendedSurface *pExtSurface)
{
    MSDK_CHECK_POINTER(pExtSurface, MFX_ERR_NULL_PTR);
    mfxStatus sts = MFX_ERR_MORE_SURFACE;
    mfxFrameSurface1 *pmfxSurface = NULL;
    pExtSurface->pSurface = NULL;
    mfxU32 i = 0;

    while (MFX_ERR_MORE_DATA == sts || MFX_ERR_MORE_SURFACE == sts || MFX_ERR_NONE < sts)
    {
        if (MFX_WRN_DEVICE_BUSY == sts)
        {
            Sleep(TIME_TO_SLEEP); // just wait and then repeat the same call to DecodeFrameAsync
        }
        else if (MFX_ERR_MORE_DATA == sts)
        {
            sts = m_pBSProcessor->GetInputBitstream(&m_pmfxBS); // read more data to input bit stream
            MSDK_BREAK_ON_ERROR(sts);
        }
        else if (MFX_ERR_MORE_SURFACE == sts)
        {
            // find new working surface
            for (i = 0; i < MSDK_DEC_WAIT_INTERVAL; i += 5)
            {
                pmfxSurface = GetFreeSurface(true);
                if (pmfxSurface)
                {
                    break;
                }
                else
                {
                    Sleep(TIME_TO_SLEEP);
                }
            }
            MSDK_CHECK_POINTER(pmfxSurface, MFX_ERR_MEMORY_ALLOC); // return an error if a free surface wasn't found
        }

        sts = m_pmfxDEC->DecodeFrameAsync(m_pmfxBS, pmfxSurface, &pExtSurface->pSurface, &pExtSurface->Syncp);

        // code added here
        if (MFX_ERR_MORE_DATA == sts) //ujar
        {
            sts = m_pmfxSession->SyncOperation(pExtSurface->Syncp, MSDK_WAIT_INTERVAL);
        }

        // ignore warnings if output is available
        if (MFX_ERR_NONE < sts && pExtSurface->Syncp)
        {
            sts = MFX_ERR_NONE;
        }
    } // while processing

    return sts;
}
When one frame of data is given, DecodeFrameAsync() asks for more data. If SyncOperation is called as above, it returns MFX_ERR_NULL_PTR. That means there is no output, and the code exits.
Can you help me make this work frame by frame?
0 Kudos
IDZ_A_Intel
Employee
1,509 Views
Hi,

Let me first comment on your previous post (question 3). You can use the IdrInterval parameter to control how often IDRs should be generated. Media SDK generates one SPS/PPS per IDR.
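For illustration (reusing the m_mfxEncParams naming from your code; the values are just examples), the two parameters interact roughly like this:

// Sketch: with GopPicSize = 30 and IdrInterval = 0 every GOP starts with an IDR,
// so an SPS/PPS pair is emitted roughly every 30 frames.
m_mfxEncParams.mfx.GopPicSize  = 30; // GOP length in frames
m_mfxEncParams.mfx.IdrInterval = 0;  // 0: every I-frame is an IDR-frame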

Regarding your code snippet.
MFX_ERR_MORE_DATA means that the decoder does not have enough data to decode one frame. The application must insert more data into the bit stream buffer so that a complete frame becomes available to the decoder. As you can see, the sample code has logic to read more data into the bit stream.
So, regarding your code: do not call SyncOperation until DecodeFrameAsync returns MFX_ERR_NONE.
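In other words (a rough, untested sketch against the loop you posted), the added block would become:

sts = m_pmfxDEC->DecodeFrameAsync(m_pmfxBS, pmfxSurface, &pExtSurface->pSurface, &pExtSurface->Syncp);

// only sync when a frame has actually been scheduled
if (MFX_ERR_NONE == sts && pExtSurface->Syncp)
{
    sts = m_pmfxSession->SyncOperation(pExtSurface->Syncp, MSDK_WAIT_INTERVAL);
}
// on MFX_ERR_MORE_DATA: loop again and feed more bitstream data instead of syncing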

Regards,
Petter
0 Kudos
ujarijam
Beginner
1,509 Views
Hi Petter,
Thanks for your reply once again.
I did whatever you asked me to do to get the right timestamps, and I am now getting correct timestamps and can stream nicely. As long as stInputParam->libType = UV_MFX_IMPL_SOFTWARE is set, RTP streaming works fine. I am using an i7 machine with a Quick Sync capable processor. But when I set stInputParam->libType = UV_MFX_IMPL_HARDWARE or UV_MFX_IMPL_AUTO_ANY, I get the return code MFX_ERR_DEVICE_FAILED in sts, as shown in the following code. As you know, I am using the transcode part of the code to transcode from MPEG-2 to H.264.
// Init encode
if (m_pmfxENC.get())
{
    sts = m_pmfxENC->Init(&m_mfxEncParams);
    MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
}
Can you please help me? Why this error message, and how can I resolve it? I want to use the Quick Sync GPU so as to reduce the load on the CPU.
0 Kudos
IDZ_A_Intel
Employee
1,509 Views
Hi ujarijam,

Can you please confirm whether the default Media SDK transcode sample (or the decode/encode samples) works with HW acceleration?

If it does, then the issue likely has to do with the code changes you've made to the transcode sample.

ERR_DEVICE_FAILED unfortunately does not give much information about what is going on. To be able to help you better, can you please supply more details of the code changes you have made, and some further details about your machine, such as CPU name, driver version, laptop/desktop, and whether another discrete graphics card is installed?

Regards,
Petter
0 Kudos
ujarijam
Beginner
1,509 Views
Hi Petter,
Thanks for the answer.
The Media SDK sample works with HW acceleration.
After debugging I found that Media SDK was not getting the sequence_header_code from the input. Once that was fixed I was able to get transcoded output, but the timestamps are not correct.
I have a few questions:
1. This did not happen when UV_MFX_IMPL_SOFTWARE was set. Why?
2. The code that worked well with libType = UV_MFX_IMPL_SOFTWARE (nice streaming with correct timestamps) shows the following changes when I just switch to libType = UV_MFX_IMPL_HARDWARE:
a) The timestamps are no longer correct.
b) The H.264 output is also different: an access unit delimiter NALU (00 00 00 01 09 ..., two bytes) is followed by SPS, PPS, SEI and an IDR frame, then again an access unit delimiter, SEI, PPS, SEI and a P frame, and so on.
Why the access unit delimiter, when it was not there with UV_MFX_IMPL_SOFTWARE?
Please help me fix this issue.
0 Kudos
ujarijam
Beginner
1,509 Views
Hi Petter,
I am sorry, I have not answered all your questions from your #12.
My desktop PC details:
Processor: Intel Core i7-2600K CPU @ 3.40 GHz
Installed memory (RAM): 8.00 GB (3.16 GB usable)
System type: 32-bit operating system (Windows 7 Ultimate)
Display adapter: Intel HD Graphics Family
Driver version: 8.15.10.2372
Driver date: 4/15/2011

Motherboard: ASUS P8Z68-V PRO


The following changes were made in the code.
I call the FFmpeg RTP source API and my own network RTP output dump APIs.

mfxStatus CTranscodingPipeline::InitDecMfxParams(sInputParams *pInParams,mfxBitstream* m_pmfxBS1)
{

----------
m_mfxDecParams.mfx.DecodedOrder = 0;// we get in display order
----------

}//mfxStatus CTranscodingPipeline::InitDecMfxParams(sInputParams *pInParams,mfxBitstream* m_pmfxBS1)
mfxStatus CTranscodingPipeline::InitEncMfxParams(sInputParams *pInParams)
{
MSDK_CHECK_POINTER(pInParams, MFX_ERR_NULL_PTR);
m_mfxEncParams.mfx.CodecId = pInParams->EncodeId;
m_mfxEncParams.mfx.TargetUsage = pInParams->nTargetUsage; // trade-off between quality and speed
m_mfxEncParams.mfx.RateControlMethod = MFX_RATECONTROL_CBR;

m_mfxEncParams.mfx.EncodedOrder = 0; // binary flag, 0 signals encoder to take frames in display order
m_mfxEncParams.mfx.NumSlice = pInParams->nSlices;
// added
m_mfxEncParams.mfx.GopRefDist = 1; //no B frames
//m_mfxEncParams.mfx.IdrInterval = 0;
m_mfxEncParams.AsyncDepth = 1;
m_mfxEncParams.Protected = 0;
m_mfxEncParams.mfx.CodecProfile = MFX_PROFILE_AVC_BASELINE;
m_mfxEncParams.mfx.GopPicSize = 30;
m_mfxEncParams.mfx.MaxKbps = 2000;
//m_mfxEncParams.mfx.TargetKbps = 1500;
m_mfxEncParams.mfx.InitialDelayInKB = 0;
m_mfxEncParams.mfx.NumRefFrame = 1;
m_mfxEncParams.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_PROGRESSIVE;
---------------

}//mfxStatus CTranscodingPipeline::InitEncMfxParams(sInputParams *pInParams)

int main(mfxI32 argc,mfxI8 **argv)
{
sInputParams stInputParams;
-------
stInputParam.DecodeId = UV_MFX_CODEC_MPEG2;
stInputParam.EncodeId = UV_MFX_CODEC_AVC;
stInputParam.nTargetUsage=MFX_TARGETUSAGE_BALANCED;
//stInputParam->nAsyncDepth=0;
stInputParam.nBitRate = 1500;
stInputParam.libType =UV_MFX_IMPL_HARDWARE;//MFX_IMPL_SOFTWARE;


while(av_read_frame(pFormatCtx, &packet)>=0)//FFMPEG API to read data from input port
{
if(packet.stream_index==videoStream)
{
if(packet.data[3]!=0xb3)//searching for sequence_header_code
continue;
m_PTS = streams->parser->pts - streams->start_time;//timestamp
if(m_PTS==prevTS2)//skip if same Time stamp
continue;
prevTS2=m_PTS;
memcpy(ubFileBuffer,packet.data,packet.size);
ulTotalBytesInBuffer = packet.size;

//construct input Bitstream structure
m_Bitstream.Data=ubFileBuffer;
m_Bitstream.DataLength =ulTotalBytesInBuffer;
m_Bitstream.DataOffset =0;
m_Bitstream.MaxLength = ullStreamLength;
m_Bitstream.TimeStamp=m_PTS;
stInputData.ulInputDataSize[0] = packet.size;
stInputData.ullTimeStamp[0] = (mfxU64)m_PTS;
prevTS1=m_PTS;
m_pmfxBS = &m_Bitstream; //m_pmfxBS is passed into lib

// Init Transcode-Lib
sts = UvTranscode.Init(stInputParams,m_SessionArray,&m_pSessionArray,m_pmfxBS);
if(MFX_ERR_MORE_DATA_UVA==sts)
continue;
else
break;
}
else
continue;

}


MSDK_CHECK_PARSE_RESULT(sts, MFX_ERR_NONE, 1);

ThreadTranscodeContext *pContext = m_pSessionArray;
pContext->transcodingSts = MFX_ERR_NONE;
//Get output Buffer
sts = GetOutputBuffPointer(&stOBS);
stOutBitstream = &stOBS;
iTotalFrames = *stNwkRendParam.lNumbFrames;
//start transcoding
while(1)
{
if (FrameNumber < 1000)
{
// if push model and need more data - try to get bitstream
if (MFX_ERR_NONE == pContext->transcodingSts)
{
pContext->transcodingSts = pContext->pPipeline->Transcode(stOutBitstream);

}
if(m_Bitstream.DataOffset)
if(MFX_OUTPUT_BUFFER_ENABLE == pContext->transcodingSts)
{


//correcting corrupt time stamps (work around) Lib supposed to give correct time stamps
if(stOutBitstream->TimeStamp {
if(m_Bitstream.DataOffset == stInputData.ulInputDataSize[0])
stOutBitstream->TimeStamp=stInputData.ullTimeStamp[0];

}
//send it on to be made into TS and RTP packets and sent to the output port
SendToPort(stOutBitstream);
ulPrevTSOut = stOutBitstream->TimeStamp;

//keep in queue
for(int j=0;j {
stInputData.ulInputDataSize = stInputData.ulInputDataSize[j+1];
stInputData.ullTimeStamp = stInputData.ullTimeStamp[j+1];

}
ulInputIndex = ulFilledInIndices-2;

memset(stOutBitstream->Data + stOutBitstream->DataOffset,0,stOutBitstream->DataLength);
stOutBitstream->DataLength =0;
stOutBitstream->DataOffset =0;
pContext->transcodingSts = MFX_ERR_NONE;
FrameNumber++;
bFiledump=true;
}
if(m_Bitstream.DataOffset)
{
if ((MFX_ERR_MORE_DATA_UVA == pContext->transcodingSts)||(MFX_ERR_NONE == pContext->transcodingSts))
{
//Read the data more
REPEAT: if(av_read_frame(pFormatCtx, &packet)>=0)//FFMPEG API to read data from input port
{
if(packet.stream_index==videoStream)
{
if(bFiledump==false)
{
m_PTS = streams->parser->pts - streams->start_time;//timestamp
if(m_PTS==prevTS2)
goto REPEAT;
prevTS2=m_PTS;
memcpy(m_Bitstream.Data + ulTotalBytesInBuffer,packet.data,packet.size);
ulTotalBytesInBuffer += packet.size;
m_Bitstream.DataLength=ulTotalBytesInBuffer;
m_Bitstream.DataOffset=0;
m_Bitstream.TimeStamp=stInputData.ullTimeStamp[0];
ulInputIndex++;
stInputData.ulInputDataSize[ulInputIndex] = packet.size;
stInputData.ullTimeStamp[ulInputIndex] = (mfxU64)m_PTS;
}
else if(bFiledump==true)
{
m_PTS = streams->parser->pts - streams->start_time;//timestamp
if(m_PTS==prevTS2)
goto REPEAT;
prevTS2=m_PTS;
memcpy(m_Bitstream.Data,m_Bitstream.Data + m_Bitstream.DataOffset,m_Bitstream.DataLength);
memset(m_Bitstream.Data+m_Bitstream.DataLength,0,m_Bitstream.DataOffset);
m_Bitstream.DataOffset=0;
memcpy(m_Bitstream.Data + m_Bitstream.DataLength,packet.data,packet.size);
m_Bitstream.DataLength +=packet.size;
ulTotalBytesInBuffer=m_Bitstream.DataLength;
m_Bitstream.TimeStamp=stInputData.ullTimeStamp[0];
ulInputIndex++;
stInputData.ulInputDataSize[ulInputIndex] = packet.size;
stInputData.ullTimeStamp[ulInputIndex] = (mfxU64)m_PTS;
bFiledump=false;

}
m_pmfxBS = &m_Bitstream; //m_pmfxBS is passed into lib
ulFilledInIndices = ulInputIndex +1;

prevTS1 = stInputData.ullTimeStamp[0];

}
else
goto REPEAT;
}
else
{
if(!packet.size)//in case input streaming is stopped and likely to restart
{
Sleep(1000);
goto REPEAT;
}
break;//file is over
}
pContext->transcodingSts = MFX_ERR_NONE;
continue;
}//if (MFX_ERR_MORE_DATA _UVA== pContext->transcodingSts)
}
else
pContext->transcodingSts = MFX_ERR_NONE;


}//if (FrameNumber<1000)
else
{
//cleanup
if(stOutBitstream->Data )
{
delete stOutBitstream->Data;
stOutBitstream->Data=NULL;
}
if(ubFileBuffer)
{
delete ubFileBuffer;
ubFileBuffer=NULL;
}
break;
}

// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
}
---------
//display process result
sts = UvTranscode.ProcessResult(m_StartTime);
MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, 1);

exit(0);
}//main

mfxStatus CTranscodingPipeline::Transcode(mfxBitstream *stOutBS)
{
static bool OutEnable =false;
-------------
---------------
while (MFX_ERR_NONE == sts )
{

if (bNeedDecodedFrames)
{
if (!bEndOfFile)
{
sts = DecodeOneFrame(&DecExtSurface);
if (MFX_ERR_MORE_DATA == sts)
{
sts = DecodeLastFrame(&DecExtSurface);
bEndOfFile = true;

}
if(MFX_ERR_MORE_DATA_UVA==sts)
{
return MFX_ERR_MORE_DATA_UVA;

}

}
-----------
}//if (bNeedDecodedFrames)

if(OutEnable == false)
pBS->Bitstream= *stOutBS;
----------
----------
if (m_BSPool.size() == m_AsyncDepth)
{
sts = PutBS(stOutBS);
MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
OutEnable=true;
return MFX_OUTPUT_BUFFER_ENABLE;
}
}//while (MFX_ERR_NONE == sts )

mfxStatus CTranscodingPipeline::PutBS(mfxBitstream *stOutPutBitStream)
{

mfxStatus sts = MFX_ERR_NONE;
ExtendedBS *pBitstreamEx = m_BSPool.front();

sts = m_pmfxSession->SyncOperation(pBitstreamEx->Syncp, MSDK_WAIT_INTERVAL);
MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);

stOutPutBitStream->DataLength = pBitstreamEx->Bitstream.DataLength;

stOutPutBitStream->DataOffset = pBitstreamEx->Bitstream.DataOffset;
stOutPutBitStream->DataFlag = pBitstreamEx->Bitstream.DataFlag;
stOutPutBitStream->FrameType = pBitstreamEx->Bitstream.FrameType;
stOutPutBitStream->PicStruct = pBitstreamEx->Bitstream.PicStruct;
stOutPutBitStream->TimeStamp = pBitstreamEx->Bitstream.TimeStamp;
pBitstreamEx->Bitstream.DataLength = 0;
pBitstreamEx->Bitstream.DataOffset = 0;

m_BSPool.pop_front();
m_pBSStore->Release(pBitstreamEx);

return sts;
} //mfxStatus CTranscodingPipeline::PutBS()

mfxStatus CTranscodingPipeline::DecodeOneFrame(ExtendedSurface *pExtSurface)
{
------
-------
sts = m_pmfxDEC->DecodeFrameAsync(m_pmfxBS, pmfxSurface, &pExtSurface->pSurface, &pExtSurface->Syncp);
if (MFX_ERR_MORE_DATA == sts)
{
//data is not sufficient
Status = sts=MFX_ERR_MORE_DATA_UVA;
break;
}
--------
}//mfxStatus CTranscodingPipeline::DecodeOneFrame(ExtendedSurface *pExtSurface)
0 Kudos
ujarijam
Beginner
1,509 Views
Hi,

I have also observed the following.
A. Suppose I set libType = MFX_IMPL_SOFTWARE.

1st call to av_read_frame() sets
m_Bitstream.DataLength= 97501 bytes
m_Bitstream.TimeStamp = 60508
m_Bitstream.DataOffset =0
It is sent into the transcode function, which returns MFX_ERR_MORE_DATA_UVA, and the parameters are set to:

m_Bitstream.DataLength= 3
m_Bitstream.TimeStamp = 60508
m_Bitstream.DataOffset =97498
stOutBitstream->DataLength = 0
stOutBitstream->TimeStamp = 0

2nd call to av_read_frame() sets
m_Bitstream.DataLength= 97501 +6794 = 104295
m_Bitstream.TimeStamp = 60508
m_Bitstream.DataOffset =0

It is sent to the transcoder, which returns MFX_OUTPUT_BUFFER_ENABLE (meaning that transcoded output is available),

and parameters are set to

m_Bitstream.DataLength= 6794
m_Bitstream.TimeStamp = 60508
m_Bitstream.DataOffset =97501
stOutBitstream->DataLength = 49610
stOutBitstream->TimeStamp =60508

3rd call to av_read_frame() sets
m_Bitstream.DataLength= 6794 + 7162 = 13956
m_Bitstream.TimeStamp = 64809
m_Bitstream.DataOffset =0

After Transcode() is called parameters are set to

m_Bitstream.DataLength= 7162
m_Bitstream.TimeStamp = 64809
m_Bitstream.DataOffset =6794
stOutBitstream->DataLength = 6515
stOutBitstream->TimeStamp =64809

You can observe that the timestamps come out correctly.

B. Now, keeping the same code, I set libType = MFX_IMPL_HARDWARE.

The following is observed:

1st call to av_read_frame() sets
m_Bitstream.DataLength= 97501 bytes
m_Bitstream.TimeStamp = 60508
m_Bitstream.DataOffset =0

After Transcode() is called parameters are set to
m_Bitstream.DataLength= 3
m_Bitstream.TimeStamp = 60508
m_Bitstream.DataOffset =97498
stOutBitstream->DataLength = 0
stOutBitstream->TimeStamp = 0

2nd call to av_read_frame() sets
m_Bitstream.DataLength= 97501 +6794 = 104295
m_Bitstream.TimeStamp = 60508
m_Bitstream.DataOffset =0

After Transcode() is called, it returns MFX_ERR_MORE_DATA_UVA and the parameters are set to:

m_Bitstream.DataLength= 3
m_Bitstream.TimeStamp = 60508
m_Bitstream.DataOffset =104292
stOutBitstream->DataLength = 0
stOutBitstream->TimeStamp =0

3rd call to av_read_frame() sets
m_Bitstream.DataLength= 104295 + 7162 = 111457
m_Bitstream.TimeStamp = 60508
m_Bitstream.DataOffset =0

After Transcode() is called, it returns MFX_OUTPUT_BUFFER_ENABLE and the parameters are set to:

m_Bitstream.DataLength= 7162
m_Bitstream.TimeStamp = 60508
m_Bitstream.DataOffset =104295
stOutBitstream->DataLength = 16528
stOutBitstream->TimeStamp =60508

4th call to av_read_frame() sets
m_Bitstream.DataLength= 7162 + 16541= 23703
m_Bitstream.TimeStamp = 64809
m_Bitstream.DataOffset =0

After Transcode() is called, it returns MFX_OUTPUT_BUFFER_ENABLE and the parameters are set to:

m_Bitstream.DataLength= 16541
m_Bitstream.TimeStamp = 60508
m_Bitstream.DataOffset =7162
stOutBitstream->DataLength = 1463
stOutBitstream->TimeStamp =60508

If you compare, there is a difference between the SW and HW settings. In the HW case the timestamps also do not come out correctly.

Please help me fix this issue. I need to use the HW accelerator in my application.

0 Kudos
IDZ_A_Intel
Employee
1,509 Views
Hi ujarijam,

Many questions. First of all, Media SDK handles timestamps in a completely transparent way: the encoder/decoder just takes the timestamp from the input frame and transfers it, unmodified, to the output frame. From looking at your logs, I suspect the timestamp issue you are having is related to the order in which frames are delivered from the decoder (as discussed earlier in the thread). Can you please verify whether that is the case? If so, you can either do the reordering yourself by keeping a buffer of frames "in flight", or change the code to make all encode/decode calls synchronous, as discussed earlier. Considering your use case, the latter may actually be the better approach (much simpler code). The transcode sample is built with quite a different goal in mind, namely high throughput, not a streaming and low-latency use case. Unfortunately, we do not have any samples right now that illustrate this.

Media SDK hardware and software encoders have slightly different behavior; do not expect exactly the same output. You can control encoder/stream creation features via encoder parameter settings. For instance, to disable the AU delimiter, use mfxExtCodingOption::AUDelimiter = MFX_CODINGOPTION_OFF. Also, picture timing and buffering period SEI messages are likely inserted in the stream by default. To disable them you can set mfxExtCodingOption::PicTimingSEI / VuiVclHrdParameters / VuiNalHrdParameters to MFX_CODINGOPTION_OFF.
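For example (a sketch only, for instance inside InitEncMfxParams before the encoder's Init call; the extended buffer must remain valid until Init is called, hence the static storage here):

// Sketch: disable the AU delimiter and timing/HRD SEI via mfxExtCodingOption.
static mfxExtCodingOption codingOption;
static mfxExtBuffer *extBuffers[1];

memset(&codingOption, 0, sizeof(codingOption));
codingOption.Header.BufferId = MFX_EXTBUFF_CODING_OPTION;
codingOption.Header.BufferSz = sizeof(codingOption);
codingOption.AUDelimiter         = MFX_CODINGOPTION_OFF; // no access unit delimiter NALUs
codingOption.PicTimingSEI        = MFX_CODINGOPTION_OFF; // no picture timing SEI
codingOption.VuiNalHrdParameters = MFX_CODINGOPTION_OFF;
codingOption.VuiVclHrdParameters = MFX_CODINGOPTION_OFF;

extBuffers[0] = reinterpret_cast<mfxExtBuffer*>(&codingOption);
m_mfxEncParams.ExtParam    = extBuffers;
m_mfxEncParams.NumExtParam = 1;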

Some comments on the supplied code:
- DecodedOrder: This parameter is deprecated in Media SDK 3.0. You can leave it as 0.
- Encoder config: rate control is CBR. In that case you should set TargetKbps (see the sketch after this list). Currently that line is commented out.
- Since I do not have insight into the interfacing components you use I'm not able to pinpoint any other issues.
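Roughly (reusing your m_mfxEncParams naming; the exact bitrate remains your choice):

m_mfxEncParams.mfx.RateControlMethod = MFX_RATECONTROL_CBR;
m_mfxEncParams.mfx.TargetKbps        = 1500; // required for CBR; currently commented out in your code
// m_mfxEncParams.mfx.MaxKbps is generally ignored for plain CBR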

Your HW platform/setup looks good. I do not foresee any issues, especially since you have already verified that HW acceleration works using the unmodified samples.

Regards,
Petter
0 Kudos
ujarijam
Beginner
1,509 Views
Hi Petter,
I have a few questions:
1. Please look at #15 once again. I mentioned that with SW, the SDK initially takes two frames of data and gives one frame of output, whereas with HW, the SDK takes three frames and gives two frames of output. Why this difference?
2. You mentioned in your last answer that the Media SDK hardware and software encoders behave slightly differently. What exactly are the differences between the software and hardware implementations?
3. Actually, if the SDK took one frame of data without asking for more, there would be no problem with timestamps. I understand that the current code has been designed for maximum throughput, but there should be some setting to tell the SDK that the operation is synchronous before the decoder accepts input data, so that it accepts data frame by frame without asking for more. In my experiments I found that even after giving a full working frame, the SDK still asked for more data. Please give your comments on this.
Anyway, I will try to implement it as per your comments.
0 Kudos
ujarijam
Beginner
1,509 Views
Hi Petter,
I added SyncOperation in the mfxStatus CTranscodingPipeline::Transcode(mfxBitstream *stOutBS)
function to make the operation synchronous:
sts = DecodeOneFrame(&DecExtSurface);
sts = m_pmfxSession->SyncOperation(DecExtSurface.Syncp, MSDK_WAIT_INTERVAL);
DecExtSurface.Syncp = 0;
This is followed by
VppExtSurface.pSurface = DecExtSurface.pSurface;
sts = EncodeOneFrame(&VppExtSurface,&m_BSPool.back()->Bitstream);
sts = PutBS(stOutBS);
Subsequently, PutBS() calls
sts = m_pmfxSession->SyncOperation(pBitstreamEx->Syncp, MSDK_WAIT_INTERVAL);
But the output and timestamp behavior is the same as explained earlier in this thread.
My problem is not fixed yet.
I have one question to ask you: can I do multiple RTP streams with Quick Sync technology using Transcode(), or not?
Does your sample code support it? Does Media SDK support multiple RTP streams through Transcode()?
Can you explain clearly how Media SDK works with HW acceleration? The document I have is mediasdk-man.pdf, API version 1.3.
Why does the Media SDK decoder not decode with one frame of input? This question has still not been answered.
Please help me.
How can mfxExtCodingOption::AUDelimiter = MFX_CODINGOPTION_OFF be set in the
mfxStatus CTranscodingPipeline::InitEncMfxParams(sInputParams *pInParams) function?
0 Kudos
ujarijam
Beginner
1,509 Views
0 Kudos