Intel® Integrated Performance Primitives
Deliberate problems developing high-performance vision, signal, security, and storage applications.

Video Render

bale
Beginner
418 Views
Hi all,

Is there some sample code or a step by step guide available that implements a windows video render (DX or GDI)?

Thanks,

Bale

6 Replies
franknatoli
New Contributor I
When you say "implements", do you mean using an existing IPP/UMC video renderer, or writing a new one from the ground up? If you mean the former: simple_player.cpp calls AVSync, which is itself a layer that abstracts access to the different decoders and renderers. You can compile simple_player.cpp in debug mode, then step through it and observe what it does. The code examples in umc-manual.pdf tend not to run when copied verbatim. If all of the above fails, I can post a sample use of the video decoder that works for me.
bale
Beginner
Hi,

Yes, I mean use the GDIRenderer to display video on the screen. I had not seen AVSync in the code, but I'll have a look through the simple_player code.

Out of interest, when you wrote your decoder, did you also use AVSync, or did you use the GDI/DX renderer directly? Which would you recommend?

Thanks,

Bale
franknatoli
New Contributor I
I did not use AVSync because, as far as I can tell (AVSync has zero documentation in umc-manual.pdf), it offers no individual frame display control: you let it run, and a dedicated thread takes over until you say stop. My particular development context is multiple streams in multiple MDI child windows, so I need finer control than AVSync permits.

I'm afraid my efforts to use GDIVideoRender came to nought. Instead, I coerced the VideoData object produced by the decoder's GetFrame into delivering RGB32 bitmaps, and handled that data with my own OnPaint code.
bale
Beginner

Yes, that is what I am resigned to doing (displaying bitmaps). It is a pity that the GDIRenderer isn't better documented/supported, as Windows display can be a little messy...

Regarding your display, what form does the video data have to be in for the bitmap? At the moment, I have a YUV420 frame output from the decoder, and I use the ippiYUV420ToRGB_8u_P3(yuvArray, rgbArray, roiSize) function to convert it to RGB data. If I then try to create a bitmap and display it, the colours are incorrect. The call I use to create the bitmap is:

Bitmap* newBitmap = new Bitmap(width, height, width, PixelFormat32bppRGB, (BYTE*)rgbBuffer);

I am not sure where I am going wrong, but I guess it is one of three places:
(1) The Intel IPP RGB format that I convert to (currently RGB).
(2) The 'stride' of the Bitmap (currently set to be the video width), or
(3) The PixelFormat (currently set to PixelFormat32bppRGB).

I have played around with all three of these settings, but cannot seem to get correct video. What settings do you have for these?

Thanks for your help so far,

Bale
franknatoli
New Contributor I
Ah, there is a much simpler solution. When you Init your VideoData object, make the third argument the color format that corresponds to your video hardware, e.g.:

//-------------------------------------------------------------------------
// initialize VideoOutput object
//-------------------------------------------------------------------------
UMC::VideoData videoOut;
umcResult = videoOut.SetAlignment(1);
if (umcResult != UMC::UMC_OK)
{
str.Format("VideoThread %s VideoData::SetAlignment failure %d", (LPCTSTR)m_strStreamPath, umcResult);
AfxMessageBox(str);
decoder->Reset();
decoder->Close();
splitter->Stop();
splitter->Close();
reader.Close();
return;
}

umcResult = videoOut.Init(
videoTrackInfo->clip_info.width,
videoTrackInfo->clip_info.height,
UMC::RGB32); // formerly videoTrackInfo->color_format
if (umcResult != UMC::UMC_OK)
{
str.Format("VideoThread %s VideoData::Init failure %d", (LPCTSTR)m_strStreamPath, umcResult);
AfxMessageBox(str);
decoder->Reset();
decoder->Close();
splitter->Stop();
splitter->Close();
reader.Close();
return;
}

size_t videoSize = videoTrackInfo->clip_info.width * videoTrackInfo->clip_info.height * 4;
Ipp8u *lpVideo = (Ipp8u*)new BYTE[videoSize];
if (!lpVideo)
{
AfxMessageBox("Memory allocation failure");
videoOut.Close();
decoder->Reset();
decoder->Close();
splitter->Stop();
splitter->Close();
reader.Close();
return;
}
umcResult = videoOut.SetBufferPointer(lpVideo, videoSize);
if (umcResult != UMC::UMC_OK)
{
str.Format("VideoThread %s VideoData::SetBufferPointer failure %d", (LPCTSTR)m_strStreamPath, umcResult);
AfxMessageBox(str);
videoOut.Close();
decoder->Reset();
decoder->Close();
splitter->Stop();
splitter->Close();
reader.Close();
return;
}

videoOut.SetDataSize(0);

Then when you call GetFrame, you'll indeed get video frames in RGB32, no fuss, no muss, no need to fiddle with YUV conversion, e.g.:

// get video data from the splitter
UMC::MediaData videoIn;
umcResult = splitter->GetNextData(&videoIn, videoTrack);
...
// if call to GetNextData was entirely successful then pass MediaData input to GetFrame
if (umcResult == UMC::UMC_OK)
umcResult = decoder->GetFrame(&videoIn, &videoOut);
// if call to GetNextData was not entirely successful then pass NULL input to GetFrame
else
umcResult = decoder->GetFrame(NULL, &videoOut);
...
struct UMC::VideoData::PlaneInfo planeInfo;
for (int plane = 0; plane < videoOut.GetNumPlanes(); plane++)
{
videoOut.GetPlaneInfo(&planeInfo, plane);
TRACE("VideoThread %s plane %d m_pPlane 0x%08X width %d height %d iSampleSize %d iSamples %d iBitDepth %d nPitch %d nOffset 0x%X nMemSize %u ",
(LPCTSTR)m_strStreamPath,
plane,
planeInfo.m_pPlane,
planeInfo.m_ippSize.width,
planeInfo.m_ippSize.height,
planeInfo.m_iSampleSize,
planeInfo.m_iSamples,
planeInfo.m_iBitDepth,
planeInfo.m_nPitch,
planeInfo.m_nOffset,
planeInfo.m_nMemSize);

// if first plane then pass planar data to view
if (plane == 0)
{
// if size of video frame has changed
if (m_lpFrameData &&
(planeInfo.m_ippSize.width != m_nFrameWidth || planeInfo.m_ippSize.height != m_nFrameHeight))
{
delete [] m_lpFrameData;
m_lpFrameData = NULL;
}

// if video frame not allocated
if (!m_lpFrameData)
{
// check that memory size as expected
if (planeInfo.m_nMemSize != planeInfo.m_ippSize.width * planeInfo.m_ippSize.height * 4)
{
str.Format("VideoThread %s m_nMemSize actual %u expected %u",
(LPCTSTR)m_strStreamPath, planeInfo.m_nMemSize, planeInfo.m_ippSize.width * planeInfo.m_ippSize.height * 4);
AfxMessageBox(str);
videoOut.Close();
decoder->Reset();
decoder->Close();
splitter->Stop();
splitter->Close();
reader.Close();
return;
}

// save parameters
m_nFrameWidth = planeInfo.m_ippSize.width;
m_nFrameHeight = planeInfo.m_ippSize.height;

// allocate frame buffer
m_lpFrameData = new BYTE[planeInfo.m_ippSize.width * planeInfo.m_ippSize.height * 4];
if (!m_lpFrameData)
{
AfxMessageBox("Memory allocation failure");
videoOut.Close();
decoder->Reset();
decoder->Close();
splitter->Stop();
splitter->Close();
reader.Close();
return;
}
}

// copy video frame data
memcpy(m_lpFrameData, planeInfo.m_pPlane, planeInfo.m_nMemSize);

// repaint view
Invalidate(m_bErase);
}
}

Then in your OnPaint, which is triggered by the Invalidate, all you need to do is:

DWORD dwVideoBitsPerPixel = dc.GetDeviceCaps(BITSPIXEL);

if (m_lpBitmap)
{
// if CBitmap dimensions has changed since last allocated
if (m_nBitmapWidth != m_nFrameWidth ||
m_nBitmapHeight != m_nFrameHeight)
{
m_lpBitmap->DeleteObject();
m_lpBitmap->CreateBitmap(m_nFrameWidth, m_nFrameHeight, 1, dwVideoBitsPerPixel, NULL);
m_nBitmapWidth = m_nFrameWidth;
m_nBitmapHeight = m_nFrameHeight;
}
}
// if CBitmap object not already allocated
else
{
m_lpBitmap = new CBitmap();
m_lpBitmap->CreateBitmap(m_nFrameWidth, m_nFrameHeight, 1, dwVideoBitsPerPixel, NULL);
m_nBitmapWidth = m_nFrameWidth;
m_nBitmapHeight = m_nFrameHeight;
}

// copy frame bitmap into GDI bitmap
m_lpBitmap->SetBitmapBits(m_nBitmapWidth * m_nBitmapHeight * dwVideoBitsPerPixel >> 3, m_lpFrameData);
CDC dcMem;
dcMem.CreateCompatibleDC(&dc);
CBitmap *BitmapOld = dcMem.SelectObject(m_lpBitmap);
dc.BitBlt(0, 0, m_nBitmapWidth, m_nBitmapHeight, &dcMem, 0, 0, SRCCOPY);
dcMem.SelectObject(BitmapOld);
dcMem.DeleteDC();

If the above is not clear, let me know and I'll attach the complete source.
bale
Beginner

Thanks again for the reply.

Unfortunately, I need to be able to read both YUV and RGB data for display, so I will have to use either the IPP colour space converters or write one of my own for the YUV (for RGB I can do as you indicated).
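(For reference when checking the converter output: a minimal scalar YUV-to-RGB32 conversion for a single sample looks like the sketch below, using BT.601 full-range coefficients. The IPP converters do the same job much faster over whole planes; in a YUV420 frame, each U/V sample is additionally shared by a 2x2 block of Y samples.)

```cpp
#include <algorithm>
#include <cstdint>

// Convert one YCbCr (BT.601 full-range) sample to packed RGB32 (0x00RRGGBB).
uint32_t YuvToRgb32(uint8_t y, uint8_t u, uint8_t v)
{
    int c = y;
    int d = u - 128;  // Cb offset
    int e = v - 128;  // Cr offset
    int r = std::clamp(static_cast<int>(c + 1.402 * e), 0, 255);
    int g = std::clamp(static_cast<int>(c - 0.344 * d - 0.714 * e), 0, 255);
    int b = std::clamp(static_cast<int>(c + 1.772 * d), 0, 255);
    return (static_cast<uint32_t>(r) << 16) | (g << 8) | b;
}
```

A quick sanity check: Y=128 with neutral chroma (U=V=128) must come out as mid gray; if it comes out tinted, the channel order or the chroma offsets are wrong, which matches the "incorrect colours" symptom described earlier.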

Could you send the source to balebrennan@gmail.com? I am not really used to Windows programming (bitmaps, handles, etc.) and would like to play around with it to see if I can figure it out.

Thanks for all your help!

Bale