Media (Intel® Video Processing Library, Intel Media SDK)
Access community support with transcoding, decoding, and encoding in applications using media tools like Intel® oneAPI Video Processing Library and Intel® Media SDK
Announcements
The Intel Media SDK project is no longer active. For continued support and access to new features, Intel Media SDK users are encouraged to read the transition guide on upgrading from Intel® Media SDK to Intel® Video Processing Library (VPL), and to move to VPL as soon as possible.
For more information, see the VPL website.

QuickSync AVC encoder crash access violation

sverdlov__andrey

Hello

I'm new to QuickSync and the Intel Media SDK, so I've gone through the examples and tutorials. I need to encode YUV420P input frames with the AVC encoder using HW acceleration. As a starting point I took the simple_encode sample (the one where system memory is used for the encoding surfaces). I've initialized all the parameters, but a crash (access violation reading location) occurs shortly after encoding starts (around the 730-760 msec frame timestamp). I've tried different videos with the same result.

I've checked the FFmpeg implementation, but still couldn't fix the crash. I've also tried using aligned memory for the surface data, separate bitstream instances for each Encode call, and copying the frame data and the packet data into standalone instances, but the crash still occurs. I'm stuck with this and not sure how to fix it. The behavior is the same on different Intel GPUs, and the Intel Media SDK trace didn't clarify anything.

My hardware: Intel Core i7-6700K (Intel HD Graphics 530). One monitor is connected to the NVIDIA GPU; the Intel HD Graphics has no monitor connected (as far as I can tell, this is a valid use case for DX11). The D3D9 initialization fails, but D3D11 works fine. Please check my initialization and encode calls below.

Also, I should note that the GetFreeSurface method always returns the first surface (idx == 0), and the output packets have wrong PTS/DTS values (usually 0). See the logs below. The frames passed to the Encode method arrive in presentation order.

mfxStatus LQuickSyncEncoder::InitEncoder()
{
   mfxVersion version;
   version.Major = 1;
   version.Minor = 0;
   // Try to use D3D9 first. It has a limitation: it is unavailable without a connected monitor.
   // So, if we fail to use D3D9, we will request D3D11 instead.
   mfxStatus result = session.Init(MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D9, &version);

   if (result != MFX_ERR_NONE) {
      LFTRACE("Failed to init MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D9 accelerated session.");
      // MFX_IMPL_HARDWARE_ANY will utilize HW acceleration even if Intel HD Graphics device is not associated with primary adapter.
      // This is common case if system has high-performance GPU and Intel CPU with embedded Intel HD Graphics GPU.
      // MFX_IMPL_VIA_D3D11 is required, because Direct3D11 allows to use hardware acceleration even without connected monitor.
      result = session.Init(MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D11, &version);
      if (result != MFX_ERR_NONE) {
         // Failed to init hardware accelerated session.
         // We're not interested in the software implementation, because we can use the x264 or WMF encoders instead. So, don't try to init an IMPL_AUTO or IMPL_SOFTWARE session here.
         LFTRACE("Failed to init MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D11 accelerated session.");
         return result;
      } else LFTRACE("Succeeded to init MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D11 accelerated session.");
   } else LFTRACE("Succeeded to init MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D9 accelerated session.");

   // Sandy Bridge (2nd Gen Intel Core) or newer is required for QuickSync usage.
   // TODO: we could require a minimum CPU generation. For example, we could select Ivy Bridge (3rd Gen Intel Core) as the first CPU we support for Intel QuickSync.
   // This can be done by querying the platform from the session.
   // Why might we need this? The first generations of Intel QuickSync reportedly produce low quality at roughly the same performance as the CPU, so we gain no advantage by using them.
   // See: session.QueryPlatform method

   // Set the packets timebase
   packet.rTimebase = MSEC_TIMEBASE;

   // Set required video parameters for encode
   mfxVideoParam mfxEncParams;
   ZeroMemory(&mfxEncParams, sizeof(mfxEncParams));

   mfxEncParams.mfx.CodecId = MFX_CODEC_AVC; // TODO: QuickSync also supports MPEG-2, VP8, VP9 and H.265/HEVC encoders.
   mfxEncParams.mfx.CodecProfile = MapH264ProfileToMFXProfile(h264Profile);
   mfxEncParams.mfx.CodecLevel = MapH264LevelToMFXLevel(h264Level);
   mfxEncParams.mfx.TargetUsage = MFX_TARGETUSAGE_BALANCED; // TODO: we can support "fast speed" here
   if (videoQuality.bUseQualityScale) {
      // Intelligent Constant Quality (ICQ) bitrate control algorithm. The value is in the 1…51 range, where 1 corresponds to the best quality.
      const uint16_t arICQQuality[] = { 51, 38, 25, 12, 1 };
      mfxEncParams.mfx.ICQQuality = arICQQuality[videoQuality.qualityScale];
      mfxEncParams.mfx.RateControlMethod = MFX_RATECONTROL_ICQ;
   } else {
      mfxEncParams.mfx.TargetKbps = uint16_t(videoQuality.uBitrate);
      mfxEncParams.mfx.RateControlMethod = MFX_RATECONTROL_VBR;
   }
   const LRational rFramerate = LDoubleToRational(1.0 / vf.vfps, LDEFAULT_FRAME_RATE_BASE); // TODO: do we need to handle variable framerate?
   mfxEncParams.mfx.FrameInfo.FrameRateExtN = rFramerate.den; // den and num are swapped because the rational was built from 1.0 / fps
   mfxEncParams.mfx.FrameInfo.FrameRateExtD = rFramerate.num;
   mfxEncParams.mfx.FrameInfo.FourCC = MFX_FOURCC_NV12;
   mfxEncParams.mfx.FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
   mfxEncParams.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_PROGRESSIVE;
   mfxEncParams.mfx.FrameInfo.CropX = 0;
   mfxEncParams.mfx.FrameInfo.CropY = 0;
   mfxEncParams.mfx.FrameInfo.CropW = uint16_t(vf.GetWidthPixels());
   mfxEncParams.mfx.FrameInfo.CropH = uint16_t(vf.GetHeightPixels());
   // Width must be a multiple of 16.
   // Height must be a multiple of 16 in case of frame picture and a multiple of 32 in case of field picture.
   mfxEncParams.mfx.FrameInfo.Width = uint16_t(MSDK_ALIGN16((vf.GetWidthPixels())));
   mfxEncParams.mfx.FrameInfo.Height = uint16_t((MFX_PICSTRUCT_PROGRESSIVE == mfxEncParams.mfx.FrameInfo.PicStruct) ? MSDK_ALIGN16(vf.GetHeightPixels()) : MSDK_ALIGN32(vf.GetHeightPixels()));

   const LRational rPixelAspect = vf.PixelAspectRatioGet();

   mfxEncParams.mfx.FrameInfo.AspectRatioW = uint16_t(rPixelAspect.num);
   mfxEncParams.mfx.FrameInfo.AspectRatioH = uint16_t(rPixelAspect.den);
   mfxEncParams.mfx.FrameInfo.BitDepthLuma = 8;
   mfxEncParams.mfx.FrameInfo.BitDepthChroma = 8;

   mfxEncParams.IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY; // TODO: use GPU memory if possible
   //mfxEncParams.AsyncDepth = 1; // TODO: find a correct value

   pEncoder.Assign(new MFXVideoENCODE(session));
   // Validate the video encode parameters. The validation result is written back to the same structure.
   // MFX_WRN_INCOMPATIBLE_VIDEO_PARAM is returned if some of the video parameters are not supported;
   // in that case the encoder selects suitable parameters closest to the requested configuration.
   result = pEncoder->Query(&mfxEncParams, &mfxEncParams);
   if ((result != MFX_ERR_NONE) && (result != MFX_WRN_INCOMPATIBLE_VIDEO_PARAM)) {
      LFTRACE("Can't validate video encode parameters");
      return result;
   }

   // Query number of required surfaces for encoder.
   mfxFrameAllocRequest EncRequest;
   ZeroMemory(&EncRequest, sizeof(EncRequest));

   //#define WILL_WRITE 0x2000
   //EncRequest.Type |= /*WILL_WRITE*/MFX_MEMTYPE_SYSTEM_MEMORY | MFX_MEMTYPE_FROM_ENC; // This line is only required for Windows DirectX11 to ensure that surfaces can be written to by the application

   result = pEncoder->QueryIOSurf(&mfxEncParams, &EncRequest);
   if (result != MFX_ERR_NONE) {
      LFTRACE("pEncoder->QueryIOSurf Failed");
      return result;
   }

   const uint16_t uEncSurfaceCount = EncRequest.NumFrameSuggested;
   LFTRACEF(TEXT("Encode surfaces requested [%d]"), uEncSurfaceCount);

   // Allocate surfaces for encoder:
   // Width and height of buffer must be aligned, a multiple of 32.
   // The frame surface array keeps pointers to all surface planes and general frame info.
   const uint16_t uRequestWidth = (uint16_t)MSDK_ALIGN32(EncRequest.Info.Width);
   const uint16_t uRequestHeight = (uint16_t)MSDK_ALIGN32(EncRequest.Info.Height);
   const uint8_t uBitsPerPixel = 12; // NV12 is a 12 bits per pixel format.
   const uint32_t uSurfaceSize = uint32_t(uRequestWidth) * uRequestHeight * uBitsPerPixel / 8;

   sbaSurfaceBuffers.SetArrayCapacityLarge(uSurfaceSize * uEncSurfaceCount);
   if (!sbaSurfaceBuffers.IsValid()) {
      LFTRACE("Failed to allocate memory for surface buffers");
      return MFX_ERR_MEMORY_ALLOC;
   }
   sbaSurfaceBuffers.SetSize(uSurfaceSize * uEncSurfaceCount);

   // Allocate surface headers (mfxFrameSurface1) for encoder.
   saSurfaces.SetArrayCapacity(uEncSurfaceCount);
   saSurfaces.SetSize(uEncSurfaceCount);

   //uint8_t* pMem = (uint8_t*)_aligned_malloc(uSurfaceSize * uEncSurfaceCount, 32);
   //ZeroMemory(pMem, uSurfaceSize * uEncSurfaceCount);

   for (size_t i = 0; i < uEncSurfaceCount; i++) {
      saSurfaces[i].Assign(new mfxFrameSurface1);
      ZeroMemory(saSurfaces[i].get(), sizeof(mfxFrameSurface1));
      memcpy(&(saSurfaces[i]->Info), &(mfxEncParams.mfx.FrameInfo), sizeof(mfxFrameInfo));

      saSurfaces[i]->Data.Y = &sbaSurfaceBuffers/*pMem*/[uSurfaceSize * i];
      saSurfaces[i]->Data.U = saSurfaces[i]->Data.Y + uRequestWidth * uRequestHeight;
      // We're using the NV12 format here: U and V values are interleaved, so the V address is always U + 1.
      saSurfaces[i]->Data.V = saSurfaces[i]->Data.U + 1;
      saSurfaces[i]->Data.PitchLow = uRequestWidth;
   }

   // Initialize the Media SDK encoder.
   result = pEncoder->Init(&mfxEncParams);
   // Ignore the partial acceleration warning
   if (result == MFX_WRN_PARTIAL_ACCELERATION) {
      LFTRACE("Encoder will work with partial HW acceleration");
      result = MFX_ERR_NONE;
   }
   if (result != MFX_ERR_NONE) {
      LFTRACE("Failed to init the encoder");
      return result;
   }

   // Retrieve video parameters selected by encoder.
   mfxVideoParam selectedParameters;
   ZeroMemory(&selectedParameters, sizeof(selectedParameters));

   mfxExtCodingOptionSPSPPS mfxSPSPPS;
   ZeroMemory(&mfxSPSPPS, sizeof(mfxSPSPPS));

   mfxSPSPPS.Header.BufferId = MFX_EXTBUFF_CODING_OPTION_SPSPPS;
   mfxSPSPPS.Header.BufferSz = sizeof(mfxExtCodingOptionSPSPPS);
   LSizedByteArray saSPS(128);
   LSizedByteArray saPPS(128);
   mfxSPSPPS.PPSBuffer = saPPS.get();
   mfxSPSPPS.PPSBufSize = uint16_t(saPPS.GetSize());
   mfxSPSPPS.SPSBuffer = saSPS.get();
   mfxSPSPPS.SPSBufSize = uint16_t(saSPS.GetSize());

   // We need to get SPS and PPS data only
   mfxExtBuffer* pExtBuffers[] = { (mfxExtBuffer*)&mfxSPSPPS };
   selectedParameters.ExtParam = pExtBuffers;
   selectedParameters.NumExtParam = lenof(pExtBuffers);

   result = pEncoder->GetVideoParam(&selectedParameters);
   if (result != MFX_ERR_NONE) {
      LFTRACE("Failed to GetVideoParam of the encoder");
      return result;
   }

   // Get ExtraData
   const size_t uExtradataSize = mfxSPSPPS.SPSBufSize + mfxSPSPPS.PPSBufSize;
   if (uExtradataSize == 0) {
      LFTRACE("Extradata has wrong size.");
      return MFX_ERR_INCOMPATIBLE_VIDEO_PARAM;
   }

   saExtradata.SetArrayCapacity(uExtradataSize);
   saExtradata.SetSize(uExtradataSize);
   memcpy(saExtradata.get(), mfxSPSPPS.SPSBuffer, mfxSPSPPS.SPSBufSize);
   memcpy(saExtradata.get() + mfxSPSPPS.SPSBufSize, mfxSPSPPS.PPSBuffer, mfxSPSPPS.PPSBufSize);

   // Prepare Media SDK bit stream buffer
   ZeroMemory(&mfxBS, sizeof(mfxBS));
   mfxBS.MaxLength = selectedParameters.mfx.BufferSizeInKB * 1000; // selectedParameters.mfx.BufferSizeInKB * 1000 - this is copied from the sample
   sbaBitstreamData.SetArrayCapacityLarge(mfxBS.MaxLength);
   if (!sbaBitstreamData.IsValid()) {
      LFTRACE("Failed to allocate memory for bitstream buffer");
      return MFX_ERR_MEMORY_ALLOC;
   }
   sbaBitstreamData.SetSize(mfxBS.MaxLength);

   mfxBS.Data = sbaBitstreamData.get();

   return result;
}

bool LQuickSyncEncoder::Encode(const LVideoFrame& frm)
{
   LFTRACE();

   if (packet.DataIsValid()) {
      LFDEBUG("Previous packet hasn't been read. You must call GetNextPacket() after every Encode()");
      return false;
   }

   const videoposition_t vpCurFrame = frm.GetPosition();
   if (vpCurFrame <= vpPreviousFrame) {
      LFTRACE("Current pts is the same as or less than the previous pts. Dropping frame.");
      return true;
   }

   // Main encoding method
   mfxFrameSurface1* pSurface = GetFreeSurface(saSurfaces); // Find free frame surface
   if (pSurface == nullptr) {
      LFDEBUG("No free enc surface available");
      return false;
   }

   LFTRACEF(TEXT("Current frame pos = [%d]"), vpCurFrame);

   mfxStatus status = LoadFrameData(pSurface, frm);

   if (status != MFX_ERR_NONE) {
      LFTRACE("Unable to load frame data to enc surface");
      return false;
   }

   //LPtr<mfxBitstream> pBS(new mfxBitstream);
   //ZeroMemory(pBS.get(), sizeof(mfxBitstream));
   //LSizedByteArray BSData;
   //BSData.SetArrayCapacityLarge(mfxBS.MaxLength);
   //BSData.IsValid();
   //BSData.SetSize(mfxBS.MaxLength);
   //ZeroMemory(BSData.get(), mfxBS.MaxLength);
   //pBS->Data = BSData.get();
   //pBS->MaxLength = mfxBS.MaxLength;

   while (true) {
      // Encode a frame asynchronously (returns immediately)
      status = pEncoder->EncodeFrameAsync(NULL, pSurface, &mfxBS/*pBS.get()*/, &syncPoint);

      if ((status > MFX_ERR_NONE) && (syncPoint == NULL)) { // Repeat the call if warning and no output
         if (MFX_WRN_DEVICE_BUSY == status) Sleep(5); // Wait if device is busy, then repeat the same call
      } else if ((status > MFX_ERR_NONE) && (syncPoint != NULL)) {
         status = MFX_ERR_NONE; // Ignore warnings if output is available
         break;
      } else if (MFX_ERR_NOT_ENOUGH_BUFFER == status) {
         // TODO: Allocate more bitstream buffer memory here if needed...
         LFDEBUG("Encoder requested more memory");
         break;
      } else break;
   }

   if (MFX_ERR_NONE == status) {
      status = session.SyncOperation(syncPoint, 60000); // Synchronize. Wait until encoded frame is ready
      if (status != MFX_ERR_NONE) {
         LFDEBUG("Failed to Wait until encoded frame is ready");
         return false;
      }

      status = WritePacketData(packet, mfxBS/**pBS.get()*/);
      if (status != MFX_ERR_NONE) {
         LFDEBUG("Failed to write packet data");
         return false;
      }
   }

   vpPreviousFrame = vpCurFrame;

   // MFX_ERR_MORE_DATA is a valid case: the encoder works in async mode and needs more frames to produce a packet
   return (status == MFX_ERR_MORE_DATA) || (status == MFX_ERR_NONE);
}


mfxStatus LQuickSyncEncoder::LoadFrameData(mfxFrameSurface1* pSurface, const LVideoFrame& frm)
{
   // Get pointers to frames data planes
   LImageScanlineConstIterator siY(frm.GetImageBuffer());
   LImageScanlineIteratorU siU(frm.GetImageBuffer());
   LImageScanlineIteratorV siV(frm.GetImageBuffer());

   // Copy the source frame into the mfx structures
   const size_t uHeight = size_t(frm.GetImageBuffer().GetFormat().iHeight);

   if ((uHeight / 2) > 2048) {
      LFDEBUG("Max supported U and V planes height is 2048");
      return MFX_ERR_UNSUPPORTED;
   }

   //mfxFrameInfo* pInfo = &pSurface->Info;
   mfxFrameData* pData = &pSurface->Data;
   pData->TimeStamp = LRescaleRational(frm.GetPosition(), MSEC_TIMEBASE, MFX_TIMEBASE);

   // Copy Y plane
   memcpy(pData->Y, (uint8_t*)siY.Get(), pData->PitchLow * uHeight);

   // Copy UV plane data. NV12 interleaves U and V bytes, so each chroma row
   // starts at a pitch-aligned offset and the bytes are written at stride 2.
   const size_t uUVWidthBytes = frm.GetImageBuffer().GetFormat().GetWidthBytesPlanarU();
   size_t uRow = 0;

   while (siU.IsValid() && siV.IsValid()) {
      const uint8_t* pU = (uint8_t*) siU.Get();
      const uint8_t* pV = (uint8_t*) siV.Get();

      size_t uOffset = uRow * pData->PitchLow; // Each chroma row starts at a pitch boundary
      for (size_t i = 0; i < uUVWidthBytes; i++) {
         pData->U[uOffset] = pU[i]; // U byte
         pData->V[uOffset] = pV[i]; // V byte (Data.V == Data.U + 1, the interleaved position)
         uOffset += 2;
      }
      siU.Next();
      siV.Next();
      uRow++;
   }

   return MFX_ERR_NONE;
}

mfxStatus LQuickSyncEncoder::WritePacketData(LMediaPacket& _packet, mfxBitstream& _mfxBitstream)
{
   // Reinit the bitstream
   //const uint32_t umfxBitstreamMaxData = _mfxBitstream.MaxLength;
   //ZeroMemory(&_mfxBitstream, sizeof(_mfxBitstream));
   //_mfxBitstream.MaxLength = umfxBitstreamMaxData;
   //mfxBS.Data = sbaBitstreamData.get();

   // Use copy of the bitstream data
   //LSizedByteArray saNewPacketData(size_t(_mfxBitstream.DataLength));
   //memcpy(saNewPacketData.get(), _mfxBitstream.Data + _mfxBitstream.DataOffset, _mfxBitstream.DataLength);
   //_packet.DataAssignArray(saNewPacketData);
   // Line below: use bitstream data itself
   _packet.DataSetExternalReference(_mfxBitstream.Data + _mfxBitstream.DataOffset, size_t(_mfxBitstream.DataLength)); // Set the frame data
   // Set PTS
   if (_mfxBitstream.TimeStamp == MFX_TIMESTAMP_UNKNOWN) _packet.pts = LVID_NO_TIMESTAMP_VALUE;
   else _packet.pts = LRescaleRational(_mfxBitstream.TimeStamp, MFX_TIMEBASE, MSEC_TIMEBASE); // PTS comes from the frame passed to QuickSync
   // Set DTS
   if (_mfxBitstream.DecodeTimeStamp == MFX_TIMESTAMP_UNKNOWN) _packet.dts = LVID_NO_TIMESTAMP_VALUE;
   else _packet.dts = LRescaleRational(_mfxBitstream.DecodeTimeStamp, MFX_TIMEBASE, MSEC_TIMEBASE); // DTS is set by QuickSync
   // Reset the flags value and set a valid value for this packet
   _packet.uFlags = 0;
   if ((_mfxBitstream.FrameType & MFX_FRAMETYPE_IDR) || (_mfxBitstream.FrameType & MFX_FRAMETYPE_xIDR)) _packet.uFlags |= LMEDIA_PACKET_I_FRAME;

   if ((_mfxBitstream.FrameType & MFX_FRAMETYPE_I) || (_mfxBitstream.FrameType & MFX_FRAMETYPE_xI)) _packet.uFlags |= LMEDIA_PACKET_I_FRAME;
   else if ((_mfxBitstream.FrameType & MFX_FRAMETYPE_P) || (_mfxBitstream.FrameType & MFX_FRAMETYPE_xP)) _packet.uFlags |= LMEDIA_PACKET_P_FRAME;
   else if ((_mfxBitstream.FrameType & MFX_FRAMETYPE_B) || (_mfxBitstream.FrameType & MFX_FRAMETYPE_xB)) _packet.uFlags |= LMEDIA_PACKET_B_FRAME;
   
   LFTRACEF(TEXT("Packet PTS=%lld; DTS=%lld"), _packet.pts, _packet.dts);
   _mfxBitstream.DataLength = 0;
   _mfxBitstream.DataOffset = 0;
   return MFX_ERR_NONE;
}

mfxFrameSurface1* LQuickSyncEncoder::GetFreeSurface(const LSizedArray<LPtr<mfxFrameSurface1> >& _saSurfaces)
{
   for (size_t i = 0; i < _saSurfaces.GetSize(); i++) {
      if (_saSurfaces[i]->Data.Locked == 0) { // A surface is free when the SDK has released its lock
         LFTRACEF(TEXT("Free surface idx = %d"), int(i));
         return _saSurfaces[i].get();
      }
   }
   return nullptr;
}

Input video 1:

00:00:12.438  MAIN  LQuickSyncEncoder::Encode

00:00:12.438  MAIN  LQuickSyncEncoder::GetFreeSurface Free surface idx = 0

00:00:12.438  MAIN  LQuickSyncEncoder::Encode Current frame pos = [700]

00:00:12.454  MAIN  LQuickSyncEncoder::WritePacketData Packet PTS=0; DTS=0

00:00:12.454  MAIN  LQuickSyncEncoder::GetNextPacket

00:00:12.454  MAIN  LMultiplexerMP4<class LOutputStreamFileNotify>::WritePacket

00:00:12.454  MAIN  LMultiplexerMPEG4Base<class LOutputStreamFileNotify>::WritePacketInternal

00:00:12.469  MAIN  LQuickSyncEncoder::GetNextPacket

00:00:12.469  MAIN  LImageBufferCopy

00:00:12.469  MAIN  LImageBufferCopy

00:00:12.469  MAIN  LQuickSyncEncoder::Encode

00:00:12.485  MAIN  LQuickSyncEncoder::GetFreeSurface Free surface idx = 0

00:00:12.485  MAIN  LQuickSyncEncoder::Encode Current frame pos = [734]

Exception thrown at 0x124E5296 (libmfxhw32.dll) in ***.exe: 0xC0000005: Access violation reading location 0x20BB4FA4.

Input video 2:

00:00:42.812  MAIN  LQuickSyncEncoder::Encode

00:00:42.812  MAIN  LQuickSyncEncoder::GetFreeSurface Free surface idx = 0

00:00:42.828  MAIN  LQuickSyncEncoder::Encode Current frame pos = [760]

Exception thrown at 0x115E5296 (libmfxhw32.dll) in ***.exe: 0xC0000005: Access violation reading location 0x1EA4DAA8.

Intel Media SDK calls trace:

2828 2019-10-22 15:32:2:735     bs.reserved[]={ 0, 0, 0, 0, 0, 0 }
2828 2019-10-22 15:32:2:735     bs.DecodeTimeStamp=0
2828 2019-10-22 15:32:2:735     bs.TimeStamp=0
2828 2019-10-22 15:32:2:735     bs.Data=0000000024BC6064
2828 2019-10-22 15:32:2:735     bs.DataOffset=0
2828 2019-10-22 15:32:2:735     bs.DataLength=0
2828 2019-10-22 15:32:2:735     bs.MaxLength=3264000
2828 2019-10-22 15:32:2:735     bs.PicStruct=1
2828 2019-10-22 15:32:2:735     bs.FrameType=4
2828 2019-10-22 15:32:2:735     bs.DataFlag=0
2828 2019-10-22 15:32:2:735     bs.reserved2=0
2828 2019-10-22 15:32:2:735     mfxSyncPoint* syncp=00005800
2828 2019-10-22 15:32:2:735 function: MFXVideoENCODE_EncodeFrameAsync(0.0245 msec, status=MFX_ERR_NONE) - 


2828 2019-10-22 15:32:2:735 function: MFXVideoCORE_SyncOperation(mfxSession session=00743BE0, mfxSyncPoint syncp=00005800, mfxU32 wait=60000) +
2828 2019-10-22 15:32:2:736     mfxSession session=05E45DCC
2828 2019-10-22 15:32:2:736     mfxSyncPoint* syncp=00005800
2828 2019-10-22 15:32:2:736     mfxU32 wait=60000
2828 2019-10-22 15:32:2:736 >> MFXVideoCORE_SyncOperation called
2828 2019-10-22 15:32:2:736     mfxSession session=05E45DCC
2828 2019-10-22 15:32:2:736     mfxSyncPoint* syncp=00005800
2828 2019-10-22 15:32:2:736     mfxU32 wait=60000
2828 2019-10-22 15:32:2:736 function: MFXVideoCORE_SyncOperation(0.4659 msec, status=MFX_ERR_DEVICE_FAILED) - 


2828 2019-10-22 15:32:4:3 function: MFXVideoENCODE_EncodeFrameAsync(mfxSession session=00743BE0, mfxEncodeCtrl *ctrl=00000000, mfxFrameSurface1 *surface=00000000, mfxBitstream *bs=007435BE, mfxSyncPoint *syncp=0074361A) +
2828 2019-10-22 15:32:4:3     mfxSession session=05E45DCC
2828 2019-10-22 15:32:4:3     bs.EncryptedData=00000000
2828 2019-10-22 15:32:4:3     bs.NumExtParam=0
2828 2019-10-22 15:32:4:3     bs.ExtParam=00000000

2828 2019-10-22 15:32:4:3     bs.reserved[]={ 0, 0, 0, 0, 0, 0 }
2828 2019-10-22 15:32:4:3     bs.DecodeTimeStamp=0
2828 2019-10-22 15:32:4:3     bs.TimeStamp=0
2828 2019-10-22 15:32:4:3     bs.Data=0000000024BC6064
2828 2019-10-22 15:32:4:3     bs.DataOffset=0
2828 2019-10-22 15:32:4:3     bs.DataLength=0
2828 2019-10-22 15:32:4:3     bs.MaxLength=3264000
2828 2019-10-22 15:32:4:3     bs.PicStruct=1
2828 2019-10-22 15:32:4:3     bs.FrameType=4
2828 2019-10-22 15:32:4:3     bs.DataFlag=0
2828 2019-10-22 15:32:4:3     bs.reserved2=0
2828 2019-10-22 15:32:4:3     mfxSyncPoint* syncp=00005800
2828 2019-10-22 15:32:4:3 >> MFXVideoENCODE_EncodeFrameAsync called
2828 2019-10-22 15:32:4:3     mfxSession session=05E45DCC
2828 2019-10-22 15:32:4:3     bs.EncryptedData=00000000
2828 2019-10-22 15:32:4:3     bs.NumExtParam=0
2828 2019-10-22 15:32:4:3     bs.ExtParam=00000000

2828 2019-10-22 15:32:4:3     bs.reserved[]={ 0, 0, 0, 0, 0, 0 }
2828 2019-10-22 15:32:4:3     bs.DecodeTimeStamp=0
2828 2019-10-22 15:32:4:3     bs.TimeStamp=0
2828 2019-10-22 15:32:4:3     bs.Data=0000000024BC6064
2828 2019-10-22 15:32:4:3     bs.DataOffset=0
2828 2019-10-22 15:32:4:3     bs.DataLength=0
2828 2019-10-22 15:32:4:3     bs.MaxLength=3264000
2828 2019-10-22 15:32:4:3     bs.PicStruct=1
2828 2019-10-22 15:32:4:3     bs.FrameType=4
2828 2019-10-22 15:32:4:3     bs.DataFlag=0
2828 2019-10-22 15:32:4:3     bs.reserved2=0
2828 2019-10-22 15:32:4:3     mfxSyncPoint* syncp=00000000
2828 2019-10-22 15:32:4:4 function: MFXVideoENCODE_EncodeFrameAsync(0.0053 msec, status=MFX_ERR_DEVICE_FAILED) - 

 

Update: the structure packing is set to 1 outside of this code (#pragma pack(1)). If I restore the default value (8), the crash does not occur. The sample_encode example behaves similarly. Does this mean the Intel Media SDK structures are sensitive to memory alignment? Can you confirm that the default alignment (8) should be used?

4 Replies
sverdlov__andrey

Update: I've found the problem. It looks like something else is sensitive to memory alignment besides the surface size and frame size. In my code the packing was set to 1 (it was a surprise for me), and this caused the crash. The crash is fixed with #pragma pack(8), the default value.

They say you just need to ask someone and you will find a solution. Thanks! :-)

sverdlov__andrey

Update: the structure packing is set to 1 outside of this code (#pragma pack(1)). If I use the default alignment (8), the crash does not occur. It looks like something in the structures is sensitive to memory alignment besides the Data pointer and frame size? Can you confirm this? Thanks.

Mark_L_Intel1
Moderator

Hi Andrey,

I can confirm this since I never heard about this assumption before.

You can also check out the programming guidelines in the following section:

https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/Coding_guidelines.md#macro

There are more comments in this macro definition:

https://github.com/Intel-Media-SDK/MediaSDK/blob/master/api/include/mfxdefs.h#L53

Let me know if this helps.

Mark

sverdlov__andrey

Hello

Yes, this is helpful. Sorry for the duplicate posts (I didn't see that they had been posted).

As I mentioned, I used pack(8) to avoid this issue. Now I can see that an alignment of 4 or 8 is the Intel requirement.

Thanks!
