Hi all,
I am developing an application that involves up-scaling and de-interlacing of video frames, in order to convert received 480i frames to 1080p. I am using the VPP sample as the reference for my application. I receive frames through a "GetFrame()" API, which returns a pointer to the received frame buffer. My concern is: can I use this pointer (to the frame buffer) with "RunFrameVPPAsync"?
I imagine the received frame buffer has to be transformed into some standard structure that this (RunFrameVPPAsync) API accepts before video processing can succeed.
Is there any way I can use this frame buffer pointer with the "RunFrameVPPAsync" API to get this done?
Regards,
Sumeet Jain
Hello Sumeet,
The short answer to your question is: yes, you can, with some additional steps.
The RunFrameVPPAsync() function accepts its input frame as an mfxFrameSurface1 structure. This is an ordinary structure whose fields describe the properties of the frame, plus pointers to the frame data. There are more details about this structure in mediasdk-man.pdf (p108/p113-114) in docs/.
The transformation you have to perform will look like this:
- Populate the Info member of mfxFrameSurface1 (an mfxFrameInfo structure) with the frame information -- pretty straightforward.
- Use the pointer from GetFrame() to populate the Y, U and V plane pointers in the Data member of mfxFrameSurface1 (an mfxFrameData structure) -- there is a very good example in sample_vpp, whose LoadNextFrame() function loads the three YUV planes for VPP transformation.
If I misunderstood your question, or you need more information, please let me know.
Hi Sravanthi,
Thanks for the valuable response. I am planning to move ahead with this approach.
Regards,
Sumeet Jain
Hi,
I am unable to determine what kind of structure "GetFrame" will return.
Regards,
Sumeet Jain
Hi,
I tried setting the fields of the mfxFrameSurface1 structure using the frame buffer pointer returned by "GetFrame".
The code snippet is below:
mfxStatus LoadNextFrame(mfxFrameData* pData, mfxFrameInfo* pInfo, BYTE *pImage)
{
    MSDK_CHECK_POINTER(pData, MFX_ERR_NOT_INITIALIZED);
    MSDK_CHECK_POINTER(pInfo, MFX_ERR_NOT_INITIALIZED);
    mfxU32 w, h, i, j, pitch;
    mfxU8 *ptr;

    if (pInfo->CropH > 0 && pInfo->CropW > 0) {
        w = pInfo->CropW;
        h = pInfo->CropH;
    } else {
        w = pInfo->Width;
        h = pInfo->Height;
    }
    pitch = pData->Pitch;

    if (pInfo->FourCC == MFX_FOURCC_YUY2) {
        ptr = pData->Y + pInfo->CropX + pInfo->CropY * pitch;
        for (i = 0; i < h; i++) {
            // copy one row; advance the destination by pitch, not by bytes written
            mfxU8 *row = ptr + i * pitch;
            for (j = 0; j < 2 * w; j++)
                *row++ = *(pImage++);
        }
    }
    return MFX_ERR_NONE;
}
Where pImage is the pointer to the frame buffer returned by "GetFrame". Am I doing it correctly ?
Regards,
Sumeet Jain
Hello Sumeet,
It looks like you are working with the YUY2 format for your input? If so, yes, this snippet works for populating the data pointers in mfxFrameData.
Some suggestions: now that you have taken a peek at sample_vpp, you can model your reading of the input frame on this sample. You can do away with the GetFrame() function and use fread directly, as in the sample, to populate the data pointers. And using the NV12 format, in my view, is much more convenient and better tested!
If you have more questions, please feel free to attach your source code and/or describe your issues so that I can get a fuller picture. Thanks!
Hi SRAVANTHI,
Thanks for the great support. The application development is time-bound, which limits how deeply I can get into the Intel Media SDK.
Before moving ahead with the implementation, I would like you to review my approach.
It is as follows:
1) Get the interface for Graph builder
Snippet:
hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                      IID_IGraphBuilder, (void **)&m_pGB);
hr = m_pGB->AddFilter(m_pDF, L"Video Capture");
2) Use the SampleGrabber to capture the sample
Snippet:
hr = CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER,
                      IID_PPV_ARGS(&pGrabber));
hr = m_pGB->AddFilter(pGrabber, L"Sample Grabber");
3) After this I plan to call "SetCallback", which registers a callback function that is invoked for each sample/frame passing through the graph. In this callback function I will integrate all the VPP-related processing of the frames.
Snippet (callback function):
pSample->GetPointer(&pBuf);
The pointer returned will then be used for VPP. I expect this pointer to point to the video frame data; can you please confirm this? After that I would copy the video data to a user-allocated buffer, process it, and copy the processed data back to the same address returned by "GetPointer()". The link http://forum.infognition.com/index.php?topic=273.0;wap2 says that "pBuf will point to array of width * height * 4 bytes: blue, green, red and alpha (unused) for each pixel" if the colour space is RGB. If it's YUY2, only the pixel arrangement would be different.
4) After a frame is successfully processed, the following code will be executed.
Snippet:
hr = m_pGB->Render(m_pCamOutPin);
hr = m_pMC->Run();
I hope that this approach works out.
Regards,
Sumeet Jain
Hi SRAVANTHI,
I did an experiment which suggests that the pointer returned by GetPointer() actually points to the video data buffer. Maybe you can confirm this too. The experiment snippet is as follows:
STDMETHODIMP CallbackObject::SampleCB(double SampleTime, IMediaSample *pSample)
{
    if (!pSample)
        return E_POINTER;
    long sz = pSample->GetActualDataLength();
    BYTE *pBuf = NULL;
    pSample->GetPointer(&pBuf);
    if (sz <= 0 || pBuf == NULL)
        return E_UNEXPECTED;
    ZeroMemory(pBuf, sz);   // blank the frame to prove pBuf is the video data
    return S_OK;
}
When I ran the application, the picture control box that originally showed live video was filled with green pixels.
Regards,
Sumeet Jain
Hello Sumeet,
Thank you for your question. Regarding your previous two posts, I see that you are referring to code from infognition. Our experience is with the Media SDK and the samples/tutorials that ship with it. If you could kindly look into the Media SDK and the easy-to-work-with tutorials it provides, I think it can greatly reduce your development time. We have tutorials (as I mentioned in my first post) that deal directly with what you want to achieve.
To help your introduction, here is a link to some really well-written articles on the Media SDK to get you started: https://software.intel.com/en-us/forums/topic/530561. You can download the tutorials from http://software.intel.com/sites/default/files/mediasdk-tutorials-0.0.3.zip, and this table https://software.intel.com/en-us/articles/media-sdk-tutorial-tutorial-samples-index provides a very nice summary of what each tutorial presents to the developer. If you have ANY questions on the Media SDK or the tutorials, I can surely help.
In the meantime, I will check with my colleagues here to see if they have any insights into the code you pasted above.
Thanks a lot for the response. I will go through these links and update you.
Regards,
Sumeet Jain
Hi Sumeet,
The high-level suggestion is to use the Media SDK tutorials and code examples for this. I am attaching the common_utils.cpp file, which contains functions for loading YUV/RGB frames and populating mfxVideoParams; hope this helps. All our tutorials use these functions. For processing with VPP, the input should be in NV12 format. The LoadRawFrame() function loads a YUV frame and converts it to NV12, so that is a good starting point.
If you want to use color conversion functions, our older tutorial simple_4_encode_IPP_CC is a good example to start with. It reads RGB input and converts it to YUV.
In short, the tutorials are a very good resource, with very straightforward examples that address your issue. We highly recommend using them. If you do use them and have questions based on that, let us know.
Attached is the common_utils.cpp file.
In addition, below is the code snippet from the simple_4_encode_IPP_CC tutorial I referred to, which reads RGB and converts to YUV. You can find the tutorial here: http://software.intel.com/protected-download/267277/354922
for (int i = 0; i < inputHeight; i++) {
    nBytesRead = (mfxU32)fread(pRGB + i * stridesrcRGB, 3, inputWidth, fSource);
    if (inputWidth != nBytesRead) {
        readsts = MFX_ERR_MORE_DATA;
        break;
    }
}
Hi SRAVANTHI,
Thanks for the comments and guidance. It is absolutely true that the Intel Media SDK is sufficient to satisfy our needs.
Things I would like to emphasize:
1) We have to process live frames and output them to the display in real time.
2) Our input will be in YUV format and output can be of any format (NV12 would be fine).
3) I have implemented a callback function that is called whenever a video frame is received, and in that callback I have implemented the entire VPP functionality to process the frames before they are shown on the picture control display. I now believe this approach is viable (let's hope). The only thing that concerns me is the processing time of each frame; I hope it does not introduce noticeable latency.
Your comments/feedback are worth a lot.
Thanks,
Sumeet Jain
Hi SRAVANTHI,
Can you please send me your mail ID so that I can share my source code? I am facing a minor issue where my:
sts = pProcessor->pmfxVPP->Init(pParams);
from the sample VPP code is failing. I am not able to resolve it but am still trying. Can you please walk through the code? (I will help you understand the code flow.) The error I am getting is:
MFX_ERR_INVALID_VIDEO_PARAM
which means that I am not passing the video parameters appropriately. I have verified them multiple times but still cannot find the reason for the failure.
Can you please help me out with this?
Eagerly awaiting your response !
Regards,
Sumeet Jain
Hello Sumeet, I just sent you a PM using the "Send Author A Message" option. You can respond to that with all the details you want us to look at. Thanks.
Hi Sravanthi,
The Intel Media SDK code has been integrated into my project, but the results are not as expected. Can you please help me resolve this? It would be a great help.
Right now I am implementing VPP only for de-interlacing (not up-scaling), by setting -spic and -dpic to interlaced and progressive respectively. I am also setting the input frame rate to 60 and the output frame rate to 30.
You can look at the VPP implementation in the functions 'MergeBuffer' and 'CAuxilaryFunctions' in the file 'AuxFunctions.cpp' in the package. Can you check the code flow and whether all the parameters are set appropriately?
PS: The package is a 'dll' package.
Thanks,
Sumeet Jain
Hi Sravanthi,
Any updates?
Regards,
Sumeet Jain