Intel® Integrated Performance Primitives

MPEG2 decoder

Boris_V_1
Beginner
I have some questions about the MPEG2 decoder.
I'm trying to decode an MPEG2 stream received over the network.
I receive the stream (video only, already split out) in 2280-byte packets,
and I can't figure out how to handle it correctly.
I have run some tests; the results are below.

When I load the whole test stream into memory, the first call to MPEG2VideoDecoder->GetFrame()
always returns UMC_NOT_ENOUGH_DATA (not sure why, since the whole stream is in memory), but the next call succeeds
and all the remaining frames are decoded correctly. Everything is fine in that case.
But when I set the buffer size to, for example, 25000 bytes, the first call to MPEG2VideoDecoder->GetFrame returns UMC_NOT_ENOUGH_DATA, the second also returns UMC_NOT_ENOUGH_DATA, and the third call returns UMC_OK, but the frame is not decoded correctly: only half of the frame is decoded.
If I set the buffer size to 15000, only a quarter of the frame is decoded, and at 2280 nothing is decoded at all, even though GetFrame returns UMC_OK.
What am I doing wrong? Do I have to buffer the data first and then call GetFrame?
How do I know how much data I have to buffer?
Here is the code I'm using to test this:

UMC::MediaData dataIn;
UMC::VideoData dataOut;

// read data into memBuf
dataIn.SetBufferPointer(memBuf, BUF_SIZE);
...

while (true) {
    umcRes = mpg2d->GetFrame(&dataIn, &dataOut);
    if (umcRes == UMC::UMC_OK) {
        // frame decoded, use dataOut
    } else if (umcRes == UMC::UMC_NOT_ENOUGH_DATA) {
        // read the next data into memBuf
        dataIn.SetBufferPointer(memBuf, BUF_SIZE);
    }
    ...
}

Any help would be appreciated.

Leonid_K_Intel
Employee

Hi,

The decoder requires at least one complete encoded frame on input; the application has to take care of that. One frame is always buffered inside the decoder to perform reordering, which is why you receive NOT_ENOUGH_DATA on the first call. It really just means that there is no data ready for output, so ignore it. It can also happen when a field-coded sequence is decoded: GetFrame returns NOT_ENOUGH_DATA after the first field is decoded.

About dataIn: you should call dataIn.SetDataSize(data_size) after SetBufferPointer. The decoder takes the data to be decoded from MediaData::DataPointer. After a frame is decoded, the decoder advances DataPointer and adjusts DataSize in dataIn by the number of consumed bytes. The remaining bytes must not be dropped. If the whole stream has been read, you don't need to do anything with dataIn between GetFrame calls. If it hasn't, you need to concatenate the remainder with the new portion of data and set the proper data size.
The simplest estimate for the input buffer size is the size of an uncompressed frame, or you can use bitrate * 1 second, which is enough in most cases.
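
For the network case from the question (fixed 2280-byte packets), the bookkeeping could look roughly like the sketch below. This is only a minimal illustration of the idea above, not decoder documentation: receivePacket() is a hypothetical stand-in for however the packets actually arrive, the 1 MB / 100 KB sizes just follow the bitrate * 1 second rule of thumb, and mpg2d/dataOut are the decoder and output object from the code in this thread.

// Sketch: accumulate fixed-size network packets in one buffer and call
// GetFrame only when a reasonable amount of data has been collected.
const size_t PKT_SIZE = 2280;            // packet size from the question
const size_t BUF_SIZE = 1024 * 1024;     // roughly bitrate * 1 second
const size_t MIN_SIZE = 100 * 1024;      // don't try to decode with less than this
Ipp8u *memBuf = (Ipp8u*)malloc(BUF_SIZE);

UMC::MediaData dataIn;
dataIn.SetBufferPointer(memBuf, BUF_SIZE);
dataIn.SetDataSize(0);

for (;;) {
    // Keep the bytes the decoder has not consumed yet (GetFrame advanced
    // DataPointer), moving them back to the head of the buffer.
    size_t tail = dataIn.GetDataSize();
    memmove(memBuf, dataIn.GetDataPointer(), tail);
    dataIn.SetBufferPointer(memBuf, BUF_SIZE);

    // Top the buffer up with new packets (no overflow handling in this sketch).
    do {
        tail += receivePacket(memBuf + tail, PKT_SIZE);   // hypothetical receive
    } while (tail < MIN_SIZE && tail + PKT_SIZE <= BUF_SIZE);
    dataIn.SetDataSize(tail);

    // Drain every complete frame currently in the buffer.
    UMC::Status st;
    while ((st = mpg2d->GetFrame(&dataIn, &dataOut)) == UMC::UMC_OK) {
        // a decoded frame is in dataOut
    }
    if (st != UMC::UMC_NOT_ENOUGH_DATA)
        break;                                            // a real error
}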

I don't see an mpg2d->Init() call in your code. The VideoDecoderParams::m_pData field provides the sequence headers and, optionally, the first frame. When the first frame is provided it is decoded during Init, and then the first call to GetFrame doesn't return NOT_ENOUGH_DATA.
To retrieve the last frame, which stays buffered inside the decoder, call GetFrame(0, out) once.
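
For example, that end-of-stream flush could look like this (a minimal sketch using the names from the code in this thread):

// Once the input stream is exhausted, drain the frame that is still
// buffered for reordering by passing no input data:
umcRes = mpg2d->GetFrame(NULL, &dataOut);
if (umcRes == UMC::UMC_OK) {
    // the last frame is now in dataOut
}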

Good Luck and Regards

Boris_V_1
Beginner
Thanks for the reply.

Below is my test code.
If BSIZE is 10000, about a quarter of the frame is decoded;
if it's larger, more of the frame is decoded.
If I set BSIZE to, for example, 1000000, then the frames are decoded properly.
What am I doing wrong?

#define vidW 480
#define vidH 576
#define BSIZE 10000

UMC::VideoDecoderParams VDecParams;
UMC::ColorConversionInfo ColorInit;
UMC::MediaData dataIn;
UMC::VideoData dataOut;
UMC::ColorSpaceConverter rColorConverter;

UMC::Status umcRes = UMC::UMC_OK;

HANDLE fh=CreateFile("testmp2data.bin",GENERIC_READ,FILE_SHARE_READ,NULL,OPEN_EXISTING,0,NULL);

BYTE * memBuf=(BYTE*)malloc(5*1024*1024);
DWORD br;
//Read whole file to memory
ReadFile(fh,memBuf,5000000,&br,NULL);
CloseHandle(fh);

//for testing set buffer size to BSIZE
dataIn.SetBufferPointer(memBuf,BSIZE);
dataIn.SetDataSize(BSIZE);

mpg2d=new UMC::MPEG2VideoDecoder();

VDecParams.m_pData = &dataIn;

ColorInit.FormatDest = UMC::YV12;
ColorInit.SizeSource.width = vidW;
ColorInit.SizeSource.height = vidH;

ColorInit.SizeDest.width = vidW;
ColorInit.SizeDest.height = vidH;

ColorInit.lFlags = 1;
ColorInit.lDeinterlace = 0;
ColorInit.lInterpolation = 1;

VDecParams.cformat = UMC::YV12;
VDecParams.lFlags = UMC::FLAG_VDEC_COMPATIBLE | UMC::FLAG_VDEC_NO_PREVIEW | UMC::FLAG_VDEC_REORDER;
VDecParams.lpConverter = NULL;
VDecParams.lpConvertInit = &ColorInit;
VDecParams.uiLimitThreads = 0;

umcRes = mpg2d->Init(&VDecParams);

UMC::VideoDecoderParams vParams;
umcRes = mpg2d->GetInfo(&vParams);

VDecParams.lpConvertInit->SizeSource.width=vParams.info.clip_info.width;
VDecParams.lpConvertInit->SizeSource.height=vParams.info.clip_info.height;
VDecParams.lpConvertInit->SizeDest.width=vParams.info.clip_info.width;
VDecParams.lpConvertInit->SizeDest.height=vParams.info.clip_info.height;

dataOut.SetVideoParameters(vidW, vidH, UMC::YV12);

BYTE * pOut = (BYTE*)malloc(vidW*2*vidH);
dataOut.SetDest(pOut);
dataOut.SetPitch(vidW*2);

int cc=0;
while (cc<150) {
umcRes = mpg2d->GetFrame(&dataIn, &dataOut);
if (umcRes!=UMC::UMC_OK && umcRes!=UMC::UMC_NOT_ENOUGH_DATA) break;

const Ipp8u* pSrcYUV[3] = {dataOut.m_lpDest[0], dataOut.m_lpDest[2], dataOut.m_lpDest[1],};
int stepYUV[3] = {dataOut.m_lPitch[0], dataOut.m_lPitch[2], dataOut.m_lPitch[1],};
IppiSize roiSize = {vidW, vidH};

//convert to YUY2
ippiYCrCb420ToYCbCr422_8u_P3C2R(pSrcYUV, stepYUV, pOut, vidW*2, roiSize );

if (umcRes==0) {
HANDLE fh2=CreateFile("YUY2.raw",GENERIC_WRITE,FILE_SHARE_READ,NULL,CREATE_ALWAYS,0,NULL);
WriteFile(fh2,pOut,vidW*2*vidH,&br,NULL);
CloseHandle(fh2);
}
cc++;

//set to next data in buffer, just for testing
vm_byte * tbuf=(vm_byte *)dataIn.GetDataPointer();
dataIn.SetBufferPointer(tbuf,BSIZE);
}

free(pOut);

mpg2d->Close();
delete mpg2d;
free(memBuf);

Leonid_K_Intel
Employee

The problem is that you don't update the data in the input buffer.

I suppose DWORD br stands for bytes read. The modifications would be like the following.

#define BUF_SIZE 1000000
#define MIN_SIZE 100000 // min size to start decode, otherwise read new portion

...
ReadFile(fh,memBuf, BUF_SIZE, &br, NULL);
...
dataIn.SetDataSize(br);
...

umcRes = Init...
if(umcRes != UMC::UMC_OK) error...
while(...

int datasize = dataIn.GetDataSize();
if (datasize < MIN_SIZE) {
    Ipp8u* dataptr = (Ipp8u*)dataIn.GetDataPointer();
    ippsCopy_8u(dataptr, memBuf, datasize); // copy tail to the head of buffer
    ReadFile(fh, memBuf + datasize, BUF_SIZE - datasize, &br, NULL); // read into remaining space
    if (br == 0 && datasize < 16) break; // need more accurate EOF detection
    dataIn.MoveDataPointer((Ipp8u*)memBuf - dataptr); // back to the start of the buffer
    dataIn.SetDataSize(datasize + br); // remaining + newly read
}
GetFrame();
...
} // end of while

One more comment: you use pitch = vidW*2 in dataOut.SetPitch(vidW*2).
This method sets the pitch for the Y component, which is vidW. The other pitches are computed from the color format. Please check.
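
For example (a small sketch based on the sample code above, if you keep the YV12 output):

dataOut.SetPitch(vidW);   // Y-plane pitch; the chroma pitches follow from YV12 (vidW/2)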

Also, you can try setting ColorInit.FormatDest to the format you actually need (e.g. UMC::YUY2); this way you will not have to do the conversion yourself.

Regards

Boris_V_1
Beginner
Thanks for the answer.
My code is similar, except that I read the whole buffer at the start and
then just set the dataIn pointer to the next position in that buffer.
But the problem is still there: if MIN_SIZE is 100000 the frame is decoded properly,
but if it's 25000 only half of the frame is decoded.
I'm not sure whether this is a bug or not, but the decoder should return NOT_ENOUGH_DATA
until it has enough data to decode a whole frame (it actually works like this, but then the frame is not decoded correctly).
Anyway, I'll just use a bigger buffer for now.
Thanks for the help.