Intel® Integrated Performance Primitives
Deliberate problems developing high-performance vision, signal, security, and storage applications.

BGR to YUV for the H264Encoder...

ntt73
Beginner
604 Views

I am trying to encode a series of BGR images into H264. The IPP 5.3 Reference Manual indicates that there are functions to convert BGR to YUV. Could you provide example code to pass the YUV output to an H264Encoder? I have several questions:

1. VideoData or MediaData has a member function SetBuffer(Ipp8u* buffer, int nSize), but the output of ippiBGRToYCbCr420_8u_C3P3R is an array of plane pointers, Ipp8u* pDst[3]. Is there a way to resolve the pointer differences?

2. Is the input parameter srcStep just the line stride of the BGR image? So would it be imageWidth*3?

3. How should I set up the dstStep?

Many Thanks In Advance.

4 Replies
Ying_H_Intel
Employee

Hello,

There is a simple H.264 encoder sample at http://software.intel.com/en-us/articles/getting-started-with-intel-ipp-unified-media-classes-sample/. You may refer to it.

For example, when calling ippiBGRToYCbCr420_8u_C3P3R as below:

[cpp]void ReadYUVData(char* strFilename, Ipp8u *cYUVData, int imgWidth, int imgHeight, int frameNumber)
{
    /* The BGRtoYUV420 conversion is done here; please take care of the
       byte-step parameters. In particular, the line stride of the BGR image
       depends on how your BGR image is stored in memory: if the BGR data is
       continuous with no padding, the stride is imageWidth*3; if rows are
       4-byte aligned, it may be (imageWidth+3)/4*4*3. */

    FILE* infp = fopen(strFilename, "rb");
    if (infp == NULL)
        return;

    Ipp8u *cRGBData = ippsMalloc_8u(imgWidth*imgHeight*3);

    Ipp8u *pDstYVU[3];
    int pDstStepYVU[3] = {imgWidth, imgWidth/2, imgWidth/2};
    int pSrcStep = imgWidth*3;
    IppiSize srcSize = {imgWidth, imgHeight};
    IppStatus status;
    int i;
    for (i = 0; i < frameNumber; i++)
    {
        fread(cRGBData, 1, imgWidth*imgHeight*3, infp);
        pDstYVU[0] = cYUVData + i*imgWidth*imgHeight*3/2;
        pDstYVU[1] = pDstYVU[0] + imgWidth*imgHeight;
        pDstYVU[2] = pDstYVU[1] + imgWidth*imgHeight/4;
        status = ippiBGRToYCbCr420_8u_C3P3R(cRGBData, pSrcStep, pDstYVU, pDstStepYVU, srcSize);
    }
    ippsFree(cRGBData);
    fclose(infp);
}[/cpp]

Regards,

Ying

ntt73
Beginner

Hi Ying,

Thank you for your prompt response. I am a little confused about the initialization of the YVU data buffer. Based on your sample code I have created a method for my custom application. Could you provide some guidance on how to set the YUV data into a VideoData?

[cpp]    /**
     * Reformats a BGR 4:4:4 raster image into a 3 planes YUV 4:2:0.
     *
     * @param cBGRData the input raster in BGR format
     * @param cYUVData the output planar image
     * @param stride the line stride in the input raster
     * @param width the width of the input raster
     * @param height the height of the input raster
     * 
     * @return IppStatus returns ippStsNoErr if no error or 
     *                   ippStsNullPtrErr if input/outputs are null,
     *                   ippStsSizeErr invalid ROI size,
     *                   ippStsDoubleSize ROI not a multiple of 2.
     */
    IppStatus reformatBGRToYUV(const Ipp8u * cBGRData, Ipp8u ** cYUVData, int stride, int width, int height) {

        IppStatus status;

        IppiSize srcSize={width, height};
        int srcStep = stride;                

        cYUVData[0] = new Ipp8u[width*height*3/2]; // Y
        cYUVData[1] = new Ipp8u[width*height];     // V
        cYUVData[2] = new Ipp8u[width*height/4];   // U     
        int pDstStepYVU[3] = {width,  width/2,  width/2};

        status = ippiBGRToYCbCr420_8u_C3P3R(cBGRData, srcStep, cYUVData, pDstStepYVU, srcSize);

        return status; 
    }[/cpp]

The calling operation would look like this?

[cpp]Ipp8u * yuvData[3];
    int imgWidth = 640;
    int imgHeight = 480;
    int stride = imgWidth*3;
    IppStatus status = reformatBGRToYUV(cRGBData, yuvData, stride, imgWidth, imgHeight);
    UMC::VideoData videoDataIn;

    // now set the yuvData as an input to the H264 Encoder
    videoDataIn.Init(imgWidth,imgHeight,UMC::YV12,8);
    videoDataIn.SetPlanePointer(yuvData[1],1);  // Y
    videoDataIn.SetPlanePointer(yuvData[3],2);  // V
    videoDataIn.SetPlanePointer(yuvData[2],3);  // U
    videoDataIn.SetPlaneSampleSize(imgWidth*imgHeight*3/2,1);
    videoDataIn.SetPlaneSampleSize(imgWidth*imgHeight/4,2);
    videoDataIn.SetPlaneSampleSize(imgWidth*imgHeight,3);   [/cpp]

Ying_H_Intel
Employee

Hello,

These settings look OK, but I have a little worry about the YUV buffer. You are using separate memory blocks to store the planes:

cYUVData[0] = new Ipp8u[width*height*3/2]; // Y

cYUVData[1] = new Ipp8u[width*height];     // V

cYUVData[2] = new Ipp8u[width*height/4];   // U

and it is not clear when to delete them. As simpleencoder.cpp below shows, the encode process is completed in a loop, so you may need to set all of DataIn's plane pointers every time. A simple and safe way is to use one piece of consecutive memory for the YUV data and set the plane pointers:

yuvData[0] = cYUVData + i*imgWidth*imgHeight*3/2;
yuvData[1] = yuvData[0] + imgWidth*imgHeight;
yuvData[2] = yuvData[1] + imgWidth*imgHeight/4;

Simpleencoder.cpp:

[cpp]Ipp8u *cYUVData = ippsMalloc_8u(MAXYUVSIZE);

DataIn.Init(imgWidth, imgHeight, UMC::YV12, 8);
DataIn.SetBufferPointer(cYUVData, imgWidth*imgHeight*3/2);
DataIn.SetDataSize(imgWidth*imgHeight*3/2);

DataOut.SetBufferPointer(cVideoData, MAXVIDEOSIZE);

VideoDataSize = 0;
int nEncodedFrames = 0;
while (nEncodedFrames < frameNumber)
{
    status = H264Encoder.GetFrame(&DataIn, &DataOut);

    if (status == UMC::UMC_OK)
    {
        nEncodedFrames++;
        VideoDataSize += DataOut.GetDataSize();
        DataOut.MoveDataPointer(DataOut.GetDataSize());
        cYUVData += imgWidth*imgHeight*3/2;
        DataIn.SetBufferPointer(cYUVData, imgWidth*imgHeight*3/2);
        DataIn.SetDataSize(imgWidth*imgHeight*3/2);
    }
}
return;
}[/cpp]

Regards,

Ying

PS. About this piece of code:

videoDataIn.SetPlanePointer(yuvData[1],1);  // Y
videoDataIn.SetPlanePointer(yuvData[3],2);  // V
videoDataIn.SetPlanePointer(yuvData[2],3);  // U
videoDataIn.SetPlaneSampleSize(imgWidth*imgHeight*3/2,1);
videoDataIn.SetPlaneSampleSize(imgWidth*imgHeight/4,2);
videoDataIn.SetPlaneSampleSize(imgWidth*imgHeight,3);

Should the pointers be yuvData[0], yuvData[2], yuvData[1], and the SetPlaneSampleSize values imgWidth*imgHeight, imgWidth*imgHeight/4, imgWidth*imgHeight/4?

ntt73
Beginner

Hi Ying,

The encoder code that I am writing is called from a Java application (via JNI), so the BGR buffer is passed in to a C++ encoding function. This means I first have to convert the BGR buffer to YUV using the reformat function I have created. The yuvData array is created each time my reformat + encode function is called, and deleted on exit of the encoder function. Do you think I could reuse the same YUV buffer for each call by encapsulating the yuvData array as a member variable of a class? This would also mean refactoring the reformat + encode operation into a class.

Regarding your comments in the postscript: yes, you're right. I haven't tested the encoder function end-to-end with the Java application, so thank you for catching the errors.

Thanks again.
