There have been similar posts, but none have been terribly helpful to me. I am trying to use the Media SDK to add an H.264 video decoder to our application. We have a real-time media server which decodes and encodes video, among other things. I have followed the sample code to the best of my ability to incorporate the decoder into our architecture. The decoder receives bitstream frames in real time and provides a YUV frame when one is available. I do not have the code to post, but I do have some detailed debug output, which includes the setup data structure contents and the relevant API calls. One difference from the sample code is the use of YV12; I did verify that the Y, U, and V pointers are set up properly. I am looking for suggestions on where to focus; the MFX_ERR_UNDEFINED_BEHAVIOR error is pretty much useless. This is on a CentOS 7 system where I am able to run the Sample Decoder for file decoding.
VA symbols loaded from libva.so
0(MFX_ERR_NONE) = m_mfxSession.InitEx()
0(MFX_ERR_NONE) = MFXQueryVersion(0x7f82e40efb30, 0x7f838c7c473c) 1.16
Print_mfxBitstream: DecodeTimeStamp 0 TimeStamp 0 Data 0x7f8328143018 DataOffset 0 DataLength 18722 MaxLength 18722 PicStruct 0 FrameType 0 DataFlag 0
0(MFX_ERR_NONE) = DecodeHeader(0x7f832b5f97e0 [0, 18131, 18722, 0x7f8328143018], 0x7f82e40ee528)
Print_mfxBitstream: DecodeTimeStamp 0 TimeStamp 0 Data 0x7f8328143018 DataOffset 591 DataLength 18131 MaxLength 18722 PicStruct 0 FrameType 0 DataFlag 0
libva info: VA-API version 0.35.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
libva info: Found init function __vaDriverInit_0_32
libva info: va_openDriver() returns 0
PrintVideoParmsMFX: AsyncDepth 1 Protected 0 IOPattern 32 ExtParam (nil) NumExtParam 0
PrintVideoParmsMFX: mfx: FrameInfo Width 640 Height 480
PrintVideoParmsMFX: mfx: CodecId 0x20435641 CodecProfile 66 CodecLevel 21 NumThread 0
0(MFX_ERR_NONE) = m_pmfxDEC->Init(0x7f82e40ee528)
PrintVideoParmsMFX: AsyncDepth 1 Protected 0 IOPattern 32 ExtParam (nil) NumExtParam 0
PrintVideoParmsMFX: mfx: FrameInfo Width 640 Height 480
PrintVideoParmsMFX: mfx: CodecId 0x20435641 CodecProfile 66 CodecLevel 21 NumThread 0
Stage 1: Main decoding loop 0(MFX_ERR_NONE)
Print_mfxBitstream: DecodeTimeStamp 0 TimeStamp 0 Data 0x7f8328143018 DataOffset 591 DataLength 18722 MaxLength 18722 PicStruct 0 FrameType 0 DataFlag 0
Print_mfxFrameSurface1: Info: BitDepthLuma 0 BitDepthChroma 0 Shift 0 FourCC YV12 Width 640 Height 480 CropX 0 CropY 0 CropW 0 CropH 0 FrameRateExtN 60 FrameRateExtD 2 AspectRatioW 1 AspectRatioH 1 PicStruct 1 ChromaFormat 1
Print_mfxFrameSurface1: Data: ExtParam (nil) NumExtParam 0 PitchHigh 0 TimeStamp 0 FrameOrder 0 Locked 0 Pitch 640 Y 0x7f83280150a8 V 0x7f83280600a8 U 0x7f8328072ca8 A (nil) Corrupted 0 DataFlag 0
-16(MFX_ERR_UNDEFINED_BEHAVIOR) = DecodeFrameAsync(0x7f832b5f97e0, 0x7f832b5f9720, 0x7f832b5f9830, 0x7f832b5f9828)
DecodeHeader clearly accepts the bitstream and determines the surface details properly; I just can't seem to decode.
I could provide some of the code and more details if needed.
Thanks - Bob.
Hi Bob,
Thanks for providing the details, but we will need some additional information to debug the problem:
You have an H.264 bitstream that you pass through the decoder, and you would like the raw output in YV12 format? Can you please explain the pipeline; this will help us understand your problem better.
It would also be helpful to know your system configuration and the Media Server Studio version you are using. If you have a reproducer, that will definitely help us reproduce the issue locally; you can send it to us via private message.
In general, the Media SDK decodes to NV12 format by default; you can set up a VPP pipeline after that to change the color format to YV12.
Thanks,
Surbhi
Surbhi,
The media server accepts and provides real-time audio and video over RTP; our framework handles depacketization and packetization. My initial goal is to add a Media SDK based H.264 decoder to this framework. Our existing H.264 implementation is based on the IPP sample code and has worked reliably for quite some time. We process ingress packets in real time, passing the extracted bitstream frames to the video decoder to obtain a YUV frame (YUV420, YV12 format). All our existing media processing resources work with this YUV format.
The initial work I did used NV12, because that is what the sample decoder was doing. I was not expecting the proper YUV format; I was mainly just looking for no errors from DecodeFrameAsync. Are you saying that I must first decode to NV12 and then use a software implementation to convert to YV12?
The test scenario simply uses linphone to call into our system, providing an H.264 bitstream to the Media SDK decoder. The Media SDK bitstream object is manually configured from the depacketized data we pass in. I was initially using a single decode surface, setting the structure to reference an output buffer our framework allocates and passes in to the decoder. The DecodeHeader API successfully parses the input bitstream frames (containing SPS/PPS), since it determines the correct dimensions, profile, level, etc.; only DecodeFrameAsync has been unsuccessful. I compared the various data structures provided to the key APIs, and the sample decoder and our implementation are consistent.
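For what it's worth, the manual bitstream configuration amounts to something like the sketch below. A minimal stand-in `mfxBitstream` struct is declared here only so the snippet compiles on its own; real code would use the definition from mfxstructures.h, and the `WrapAccessUnit` helper name is my own invention for illustration.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal stand-in for the real mfxBitstream from mfxstructures.h,
// declared here only so the sketch is self-contained. The field names
// and meanings match the SDK structure.
struct mfxBitstream {
    uint64_t TimeStamp;
    uint8_t* Data;
    uint32_t DataOffset;   // offset to the first valid byte in Data
    uint32_t DataLength;   // number of valid bytes starting at DataOffset
    uint32_t MaxLength;    // total size of the Data buffer
};

// Wrap one depacketized access unit (as the RTP framework delivers it)
// in an mfxBitstream without copying the payload.
mfxBitstream WrapAccessUnit(uint8_t* frame, uint32_t size, uint64_t pts) {
    mfxBitstream bs{};
    bs.Data       = frame;
    bs.DataOffset = 0;     // whole buffer is valid
    bs.DataLength = size;
    bs.MaxLength  = size;
    bs.TimeStamp  = pts;
    return bs;
}
```

Note that the SDK advances DataOffset as it consumes bytes (visible in the debug above, where DecodeHeader moved the offset from 0 to 591), so the offset and length should be left alone between calls rather than reset.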
I have since been attempting to use system memory allocation for the surfaces (making our implementation closer to the sample decoder), but I have yet to see any positive change in the DecodeFrameAsync result.
The SDK version is MediaServerStudioProfessionalEvaluation2015R6. It would take some time to come up with a simple reproducer; our framework is quite complex, and I would need to port our implementation to a simplified application.
Thanks, Bob
Looks like I solved the main issue after adding the surface allocation and related code, as opposed to initializing a single surface structure to reference the YUV buffer provided to our decode method. I now see that DecodeFrameAsync is able to take the provided bitstream and generate YUV frames without errors. It does appear that it will only generate NV12 format, so I will have to add the VPP code to convert to YV12 as you suggested.
Thanks - Bob
I'm glad you could resolve the issue. By default the Media SDK decoder only decodes to NV12 format and needs a VPP stage to convert to YV12. VPP color conversion can take place in video memory, i.e. using hardware acceleration.
I'm not sure if you have already checked the tutorials and the framework article. If not, I would advise you to look at the tutorial code, which is much simpler than the samples; it is meant for a basic understanding of the Media SDK pipeline rather than for measuring performance. The Media SDK Framework article should also help you understand how to set up pipelines with the Media SDK in general.
Wish you happy development!
Thanks,
Surbhi
I ported the VPP code from the sample decoder, but I'm having some trouble getting VPP to initialize for the NV12 to YV12 conversion; the Query function indicates it is unsupported. Below is debug output showing the video parameters before and after the call to Query. Any suggestions?
AllocFrames: mfxVideoParam: AsyncDepth 4 Protected 0 IOPattern 33 ExtParam 0x7f2de00ee830 NumExtParam 1
AllocFrames: mfxVideoParam: vpp: In Width 640 Height 480 FourCC NV12 ChromaFormat 1 PicStruct 1
AllocFrames: mfxVideoParam: vpp: Out Width 640 Height 480 FourCC YV12 ChromaFormat 1 PicStruct 1
AllocFrames: -3(MFX_ERR_UNSUPPORTED) = m_pmfxVPP->Query(0x7f2de00ee710, 0x7f2de00ee710, 0x7f2e10025b90)
AllocFrames: mfxVideoParam: AsyncDepth 4 Protected 0 IOPattern 33 ExtParam 0x7f2de00ee830 NumExtParam 1
AllocFrames: mfxVideoParam: vpp: In Width 640 Height 480 FourCC NV12 ChromaFormat 1 PicStruct 1
AllocFrames: mfxVideoParam: vpp: Out Width 640 Height 480 FourCC ChromaFormat 1 PicStruct 1
Hi Bob,
After checking the manual, I realized we can't convert NV12->YV12 through a VPP call, which is why you are getting the unsupported error. Since the only difference between YV12 and NV12 is the way the chroma components are packed, and the Media SDK API gives you pointers to all three planes, you can write the output in YV12 format yourself with a small change to the fwrite() step.
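The repacking itself is plain pointer arithmetic and can be done without the SDK at all. A minimal sketch, assuming the standard NV12 layout (Y plane followed by an interleaved UV plane, U sample first in each pair) and a packed YV12 destination:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Repack a decoded NV12 frame (Y plane + interleaved UV plane) into a
// packed YV12 buffer (Y plane, then full V plane, then full U plane).
// The V-before-U plane order is what distinguishes YV12 from I420.
void Nv12ToYv12(const uint8_t* y, const uint8_t* uv, int pitch,
                int width, int height, uint8_t* dst) {
    // Copy luma row by row, dropping any pitch padding.
    for (int r = 0; r < height; ++r)
        std::memcpy(dst + r * width, y + r * pitch, width);

    uint8_t* dstV = dst + width * height;              // V plane comes first
    uint8_t* dstU = dstV + (width / 2) * (height / 2); // then the U plane
    for (int r = 0; r < height / 2; ++r) {
        const uint8_t* src = uv + r * pitch;           // interleaved U0 V0 U1 V1 ...
        for (int c = 0; c < width / 2; ++c) {
            dstU[r * (width / 2) + c] = src[2 * c];      // U sample
            dstV[r * (width / 2) + c] = src[2 * c + 1];  // V sample
        }
    }
}
```

In the decode pipeline above, `y`, `uv`, and `pitch` would come from the output surface's mfxFrameData (Y, UV, Pitch), with width/height taken from the crop dimensions rather than the aligned surface size.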
Sorry I missed this thread; it took me a while to get back to you.
Thanks,
Surbhi
Ok, thanks Surbhi, Asymmetry, gotta love it.