Media (Intel® Video Processing Library, Intel Media SDK)
Access community support for transcoding, decoding, and encoding in applications using media tools like the Intel® oneAPI Video Processing Library and the Intel® Media SDK.
Announcements
The Intel Media SDK project is no longer active. For continued support and access to new features, Intel Media SDK users are encouraged to read the transition guide on upgrading from Intel® Media SDK to Intel® Video Processing Library (VPL), and to move to VPL as soon as possible.
For more information, see the VPL website.

Gray Video

Guilherme_L_2
Beginner
369 Views

Hi,

I'm new to mfxlib and I'm trying to run a test with camera capture.

I'm working on video conferencing software and I want to switch the encoding path from the x264 library to Intel Quick Sync.

So, I have a YUV420 frame with the YV12 fourcc in a known format, and I'm trying to encode and record an H.264 video in a .mp4 container to check that the encode path works, but the result is always a gray frame with some weird squares and some colors appearing randomly.

I'm basically using VPP to convert the YV12 frame to NV12 and then encoding to H.264. I think the problem is in the parameters or in the output frame allocation; the idea is to keep it simple for testing and optimize later.

I have been learning on my own from sample_encode, simple_encode, simple_vpp and sample_vpp, so if you spot anything very strange in my code, please let me know.
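For reference, the YV12 → NV12 conversion that VPP performs here can be sketched in plain C++. This is a minimal illustration of the pixel layouts, not the SDK's implementation: YV12 stores the Y plane followed by the V plane and then the U plane, while NV12 stores the Y plane followed by interleaved U/V samples.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Convert one YV12 frame (Y plane, then V plane, then U plane) to
// NV12 (Y plane, then interleaved UV). Width and height must be even.
std::vector<uint8_t> yv12_to_nv12(const std::vector<uint8_t>& yv12, int w, int h)
{
    const int ysize  = w * h;
    const int csize  = ysize / 4;            // size of each chroma plane
    const uint8_t* v = yv12.data() + ysize;  // V plane comes first in YV12
    const uint8_t* u = v + csize;

    std::vector<uint8_t> nv12(ysize + 2 * csize);
    std::memcpy(nv12.data(), yv12.data(), ysize); // Y plane is identical
    for (int i = 0; i < csize; ++i) {             // interleave U then V
        nv12[ysize + 2 * i]     = u[i];
        nv12[ysize + 2 * i + 1] = v[i];
    }
    return nv12;
}
```

Note that getting the V-before-U ordering of YV12 right matters: treating the buffer as I420 (U before V) produces exactly the kind of randomly colored output described above.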

Some code:

Parameter initialization:

[cpp]

//encParameters

convertFrameRate(30, &(mfxEncParams->mfx.FrameInfo.FrameRateExtN), &(mfxEncParams->mfx.FrameInfo.FrameRateExtD));

mfxEncParams->mfx.CodecId = MFX_CODEC_AVC;
mfxEncParams->mfx.TargetUsage = MFX_TARGETUSAGE_BALANCED;
mfxEncParams->mfx.TargetKbps = 2000;
mfxEncParams->mfx.RateControlMethod = MFX_RATECONTROL_VBR;
mfxEncParams->mfx.FrameInfo.FourCC = MFX_FOURCC_NV12;
mfxEncParams->mfx.FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
mfxEncParams->mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_PROGRESSIVE;
mfxEncParams->mfx.FrameInfo.CropX = 0; 
mfxEncParams->mfx.FrameInfo.CropY = 0;
mfxEncParams->mfx.FrameInfo.CropW = 320;
mfxEncParams->mfx.FrameInfo.CropH = 240;
mfxEncParams->mfx.EncodedOrder = 0;

mfxEncParams->mfx.FrameInfo.Width = MSDK_ALIGN16(mfxEncParams->mfx.FrameInfo.CropW);
mfxEncParams->mfx.FrameInfo.Height = (MFX_PICSTRUCT_PROGRESSIVE == mfxEncParams->mfx.FrameInfo.PicStruct)? 
MSDK_ALIGN16(mfxEncParams->mfx.FrameInfo.CropH) : MSDK_ALIGN32(mfxEncParams->mfx.FrameInfo.CropH);

mfxEncParams->IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY;

//Vpp parameters

memset(mfxVppParams, 0, sizeof(mfxVideoParam));

/* input data */
mfxVppParams->vpp.In.FourCC = MFX_FOURCC_YV12;

mfxVppParams->vpp.In.CropX = 0;
mfxVppParams->vpp.In.CropY = 0;
mfxVppParams->vpp.In.CropW = 320;
mfxVppParams->vpp.In.CropH = 240;

// width must be a multiple of 16
// height must be a multiple of 16 in case of frame picture and
// a multiple of 32 in case of field picture
mfxVppParams->vpp.In.Width = MSDK_ALIGN16( mfxVppParams->vpp.In.CropW);
mfxVppParams->vpp.In.Height= MSDK_ALIGN16( mfxVppParams->vpp.In.CropH);

mfxVppParams->vpp.In.PicStruct = MFX_PICSTRUCT_PROGRESSIVE;

convertFrameRate(30, &(mfxVppParams->vpp.In.FrameRateExtN), &(mfxVppParams->vpp.In.FrameRateExtD));

/*output data */

memcpy(&mfxVppParams->vpp.Out, &mfxEncParams->mfx.FrameInfo, sizeof(mfxFrameInfo));

mfxVppParams->IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY | MFX_IOPATTERN_OUT_SYSTEM_MEMORY;

[/cpp]
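For anyone reading along: the MSDK_ALIGN16/MSDK_ALIGN32 macros used above simply round a value up to the next multiple of 16 or 32. A minimal stand-alone equivalent (function names are mine, the macro behavior is taken from the SDK samples):

```cpp
#include <cassert>

// Round x up to the next multiple of 16 / 32, as the Media SDK sample
// macros do (the bit trick works for any power-of-two alignment).
constexpr unsigned align16(unsigned x) { return (x + 15u) & ~15u; }
constexpr unsigned align32(unsigned x) { return (x + 31u) & ~31u; }
```

So 320 and 240 are already aligned for progressive content, but an odd size like 100 rounds up to 112, and an interlaced 1080-line frame would need align32(1080) == 1088.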

Surface init:

[cpp]

mfxFrameSurface1* inSurface = new mfxFrameSurface1;
memset(inSurface, 0, sizeof(mfxFrameSurface1));

mfxFrameSurface1* outSurface = new mfxFrameSurface1;
memset(outSurface, 0, sizeof(mfxFrameSurface1));

memcpy(&(inSurface->Info), &(par->vpp.In), sizeof(mfxFrameInfo));

memcpy(&(outSurface->Info), &(par->vpp.Out), sizeof(mfxFrameInfo));

inSurface->Data.Y = inFrame;
inSurface->Data.U = inSurface->Data.Y + ysize;
inSurface->Data.V = inSurface->Data.U + uvsize;
inSurface->Data.Pitch = mfxEncParams->mfx.FrameInfo.Width;

intermediateSurface->Data.Y = new mfxU8[ 320 * 240 * 12 / 8];
intermediateSurface->Data.U = intermediateSurface->Data.Y + ysize;
intermediateSurface->Data.V = intermediateSurface->Data.U + 1;

mfxBS = new mfxBitstream();
mfxBS->MaxLength = par->mfx.BufferSizeInKB * 1000;
mfxBS->Data = new mfxU8[mfxBS->MaxLength];

[/cpp]

VPP process:

[cpp]

mfxStatus sts = MFX_ERR_NONE;
for (;;)
{
    sts = mfxVpp->RunFrameVPPAsync(inSurface, outSurface, NULL, vppSyncp);
    if (MFX_ERR_NONE < sts && !(*vppSyncp)) // repeat the call if warning and no output
    {
        if (MFX_WRN_DEVICE_BUSY == sts)
            Sleep(1); // wait if device is busy
    }
    else if (MFX_ERR_NONE < sts && *vppSyncp)
    {
        sts = MFX_ERR_NONE; // ignore warnings if output is available
        break;
    }
    else
    {
        break; // not a warning
    }
}

// process errors
if (MFX_ERR_MORE_DATA == sts)
{
    printf("QuickSyncEncode : VPP : Error MFX_ERR_MORE_DATA\n");
}
else if (MFX_ERR_NONE != sts)
{
    printf("QuickSyncEncode : VPP : Error %d\n", sts);
}

[/cpp]

Encode Process:

[cpp]

sts = mfxEnc->EncodeFrameAsync(NULL, intermediateSurface, mfxBS, encSyncp);

if (sts == MFX_ERR_NONE)
{
    for (;;)
    {
        if (MFX_ERR_NONE < sts && !(*encSyncp)) // Repeat the call if warning and no output
        {
            printf("!syncp\n");
            if (MFX_WRN_DEVICE_BUSY == sts)
            {
                printf("MFX_WRN_DEVICE_BUSY\n");
                Sleep(1); // Wait if device is busy, then repeat the same call
            }
        }
        else if (MFX_ERR_NONE <= sts && *encSyncp)
        {
            sts = MFX_ERR_NONE; // Ignore warnings if output is available
            break;
        }
        else if (MFX_ERR_NOT_ENOUGH_BUFFER == sts)
        {
            printf("MFX_ERR_NOT_ENOUGH_BUFFER\n");
            // Allocate more bitstream buffer memory here if needed...
            break;
        }
        else
        {
            printf("waiting sync\n");
        }
    }

    if (MFX_ERR_NONE == sts)
    {
        sts = mfxSession->SyncOperation(*encSyncp, 60000); // Synchronize. Wait until encoded frame is ready
        *outFrame = mfxBS->Data;
        *outFrameSize = mfxBS->DataLength;
    }
}

[/cpp]

Regards,

Guilherme

P.S.: Sorry for the poor English.

6 Replies
Guilherme_L_2
Beginner

I made some progress. Now my resulting video starts with that gray output and after a while it turns into video with the expected appearance, but the video plays too slowly.

I had forgotten to set the frame pitch of the outSurface.

Regards,

Guilherme

dr_asik
Beginner

The presence of a "Sleep(1)" could explain your issue. Sleep(1) doesn't really sleep 1 ms; it sleeps at least 1 ms, and given the time granularity of the OS this can easily be 16 ms or more. In any case, if your code is too slow, it's just a matter of taking time measurements here and there to find out what is taking the time.
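This granularity point is easy to verify. Here is a quick sketch using std::chrono, with std::this_thread::sleep_for standing in for the Win32 Sleep call (the portable call has the same "at least this long" semantics):

```cpp
#include <chrono>
#include <thread>

// Measure how long a nominal 1 ms sleep actually takes. The result is
// guaranteed to be at least 1 ms; depending on the OS scheduler's timer
// resolution, it can be considerably longer.
double measured_sleep_ms()
{
    auto t0 = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

Calling this in a tight loop on a desktop OS typically shows values well above 1 ms, which is why a busy-wait with Sleep(1) per frame can cost real throughput.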

Guilherme_L_2
Beginner

I don't know if it's because I'm working with low-resolution video, but the "Sleep(1)" is never executed. And I didn't have slow code, I had a slow .mp4 video. I solved that a minute ago: I was passing a 10 fps value instead of 30 to ffmpeg.

The real problem is still the gray frames at the beginning of the video.

In one of these videos I can see some shadows of the movements made in the recorded video, so I think it could be something related to a missing I-frame.

Guilherme_L_2
Beginner

I have uploaded a sample.

Petter_L_Intel
Employee

Hi,

have you verified that the pure H.264 elementary stream output from encode is correct before muxing it into a container? The provided clip seems to have many issues, such as invalid or skewed timestamps and stream corruption.

Looking at the sample code you provided, I noticed that the VPP initialization is missing the output color space format. Since you are copying the VPP "In" data to the "Out" data, the output colorspace (fourcc) will also be YV12. Output must be set to NV12. In any case, since your code seems to execute successfully, I suspect you may be setting it anyway, just not in the code snippet you provided?
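The fix being described amounts to overriding the fourcc after the struct copy. Sketched below with a small stand-in struct rather than the real mfxFrameInfo (field names are abbreviated; the fourcc is packed the same way the SDK's MFX_MAKEFOURCC macro packs it):

```cpp
#include <cstdint>
#include <cstring>

// Pack four characters into a fourcc code, little-endian style.
constexpr uint32_t make_fourcc(char a, char b, char c, char d)
{
    return uint32_t(a) | (uint32_t(b) << 8) | (uint32_t(c) << 16) | (uint32_t(d) << 24);
}

// Stand-in for the relevant corner of mfxFrameInfo.
struct FrameInfo { uint32_t FourCC; uint16_t CropW, CropH; };

// Copy the input description, then override what must differ on output:
// same geometry, but NV12 color format for the VPP output.
FrameInfo make_vpp_out(const FrameInfo& in)
{
    FrameInfo out;
    std::memcpy(&out, &in, sizeof(out));          // inherit geometry from input
    out.FourCC = make_fourcc('N', 'V', '1', '2'); // override the color format
    return out;
}
```

The point is simply that a blanket memcpy of the input description silently carries the input fourcc along, so any field that must differ on output has to be set explicitly afterwards.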

What are the variables "ysize" and "uvsize" set to in your code?

Regards,
Petter 

Guilherme_L_2
Beginner

Hi Petter,

I'm copying mfx.FrameInfo to vpp.Out, not vpp.In, and that contains the output colorspace set to NV12 (lines 07 and 35 of the first block of code). The way I pasted the code here looks weird, but I did it that way because it is split across different methods.

ysize is width * height, and uvsize is width * height / 4.
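For a 320×240 YUV 4:2:0 frame, those plane sizes work out as follows (a trivial check; the function names are mine, the formulas are the ones from the post):

```cpp
// Plane sizes for a 4:2:0 frame: full-resolution luma, and two chroma
// planes each subsampled 2x horizontally and 2x vertically.
constexpr int luma_size(int w, int h)   { return w * h; }
constexpr int chroma_size(int w, int h) { return w * h / 4; }
```

So for 320×240, ysize = 76800 and uvsize = 19200, and a whole YV12 or NV12 frame occupies ysize + 2 * uvsize = w * h * 3 / 2 bytes, which matches the `320 * 240 * 12 / 8` allocation in the surface-init snippet above.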

Anyway, I solved the problem: the gopsize parameter of the ffmpeg container settings was set to 12, while GopPicSize in the VPP Out data was not defined.

Thanks a lot.

Guilherme.
