Media (Intel® Video Processing Library, Intel Media SDK)
Get community support for transcoding, decoding, and encoding in applications that use media tools such as Intel® oneAPI Video Processing Library and Intel® Media SDK
Announcements
The Intel Media SDK project is no longer active. For continued support and access to new features, Intel Media SDK users are encouraged to read the transition guide on upgrading from Intel® Media SDK to Intel® Video Processing Library (VPL), and to move to VPL as soon as possible.
For more information, see the VPL website.

question about sample_decode

MyMother
Beginner

hi Intel-giant,

OS: Ubuntu 12.04

       MediaSamples_Linux_6.0.16043175.175

       MediaServerStudioEssentials2015R6

Platform:  i5-4570S

       I ran sample_decode_drm and got the messages below; I have two questions.

       Q1. The frame number is 187, but the "ReadNextFrame" line was printed only 4 times. Is there a document describing this behavior, or could you explain it?

       Q2. I have a frame-based bitstream, and I have no clear idea how to feed it to the decoder. Any hints or documents for me?

[release] $ ./sample_decode_drm h264 -sw -i out.264
############# (sample_utils.cpp|ReadNextFrame|511)
Decoding Sample Version 0.0.000.0000

Input video     AVC
Output format   YUV420
Resolution      1920x1088
Crop X,Y,W,H    0,0,0,0
Frame rate      30.00
Memory type             system
MediaSDK impl           sw
MediaSDK version        1.16

Decoding started
############# (sample_utils.cpp|ReadNextFrame|511)
############# (sample_utils.cpp|ReadNextFrame|511)
############# (sample_utils.cpp|ReadNextFrame|511)
Frame number:  187, fps: 245.124, fread_fps: 0.000, fwrite_fps: 0.000
Decoding finished

 

[release] $ ./sample_decode_drm h264 -hw -i out.264
############# (sample_utils.cpp|ReadNextFrame|511)
libva info: VA-API version 0.35.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
libva info: Found init function __vaDriverInit_0_32
libva info: va_openDriver() returns 0
Decoding Sample Version 0.0.000.0000

Input video     AVC
Output format   YUV420
Resolution      1920x1088
Crop X,Y,W,H    0,0,0,0
Frame rate      30.00
Memory type             system
MediaSDK impl           hw
MediaSDK version        1.16

Decoding started
############# (sample_utils.cpp|ReadNextFrame|511)
############# (sample_utils.cpp|ReadNextFrame|511)
############# (sample_utils.cpp|ReadNextFrame|511)
Frame number:  187, fps: 688.082, fread_fps: 0.000, fwrite_fps: 0.000
Decoding finished

Jeffrey_M_Intel1
Employee

Q1: Do you get an output YUV file from sample_decode with both the hardware and software implementations? Do both have the right number of frames? ./sample_decode_drm h264 -sw -i out.264 -o test.yuv

Q2: Network and file input can be handled similarly. Unfortunately we don't have as much streaming documentation and as many examples as we would like yet, but you should be able to simulate many streaming scenarios with files as you develop. For example, you could read file input in frame-sized chunks as you work through your buffering approach. The call to DecodeFrameAsync can be made with any buffer size you choose -- you can feed decode 1 byte at a time if you want, or a buffer several MB long. The tutorials at https://software.intel.com/en-us/intel-media-server-studio-support/training have some simpler starting points than the samples and show how to set up a minimal decode, which you can expand for streaming support.
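To illustrate the chunked-feeding idea, here is a minimal sketch of appending arbitrarily sized chunks into a bitstream buffer. The `Bitstream` struct and `AppendChunk` helper are simplified stand-ins invented for this example, not Media SDK types; the real `mfxBitstream` uses `Data`/`DataOffset`/`DataLength` fields, but the compact-then-append pattern is the same one the samples use before each read:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified stand-in for mfxBitstream (Data / DataOffset / DataLength).
struct Bitstream {
    std::vector<uint8_t> data;
    size_t offset = 0;  // start of the unconsumed bytes
    size_t length = 0;  // number of unconsumed bytes
};

// Move leftover (unconsumed) bytes to the front of the buffer, then
// append a new chunk of any size -- 1 byte or several MB both work.
void AppendChunk(Bitstream& bs, const uint8_t* chunk, size_t n) {
    if (bs.length > 0 && bs.offset > 0)
        std::memmove(bs.data.data(), bs.data.data() + bs.offset, bs.length);
    bs.offset = 0;
    bs.data.resize(bs.length + n);
    std::memcpy(bs.data.data() + bs.length, chunk, n);
    bs.length += n;
}
```

After the decoder consumes data it advances the offset past what it used; compacting before each append keeps the buffer from growing without bound.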

MyMother
Beginner

hi Jeffrey Mcallister -super man

   Many thanks for your reply.

   Q1.  Do both have the right number of frames?

>> I tried the recommended command and verified with ffmpeg. Both runs produce the same frame count as the one ffmpeg reports.

[release] $ ./sample_decode_drm h264 -sw -i out.264 -o sw_out.yuv

Frame number:  187, fps: 28.866, fread_fps: 0.000, fwrite_fps: 29.342

Decoding finished

and verified with ffmpeg

[release] $ ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1920x1088 -r 30 -i sw_out.yuv -c:v libx264 -f rawvideo -b 2048k -bf 0 sw_out.h264
frame=  187 fps= 48 q=-1.0 Lsize=    1413kB time=00:00:06.23 bitrate=1856.4kbits/s

[release] $ ./sample_decode_drm h264 -hw -i out.264 -o hw_out.yuv

Frame number:  187, fps: 38.715, fread_fps: 0.000, fwrite_fps: 39.206

and verified with ffmpeg

[release] $ ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1920x1088 -r 30 -i hw_out.yuv -c:v libx264 -f rawvideo -b 2048k -bf 0 hw_out.h264
frame=  187 fps= 49 q=-1.0 Lsize=    1413kB time=00:00:06.23 bitrate=1857.2kbits/

I also verified the original bitstream

[release] $ ffprobe -v error -count_frames -select_streams v:0 \
>   -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 \
>   out.264
187

 

 

   Q2. The call to DecodeFrameAsync can be made with any buffer size you choose -- you can feed decode 1 byte at a time if you want, or a buffer several MB long.

>> Do you mean I can feed part of a frame's bitstream into DecodeFrameAsync several times, and the frame bitstream will be reassembled and decoded inside DecodeFrameAsync?

 

Jeffrey_M_Intel1
Employee

Q1: Very glad that you're getting the right output with -o. The capability to run without an output file was added for the special case of benchmarking decode separately from rendering and disk I/O, but you should start with -o. The _x11 decode sample also has a -r option for rendering to the screen.

Q2: Yes. The DecodeFrameAsync interface is very general. You can feed it data in any increments you want. If more data is needed to decode the next frame, just keep supplying it when DecodeFrameAsync returns MFX_ERR_MORE_DATA. Media SDK will take care of composing the fragments until it has enough data to decode a frame.
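That control flow can be sketched with a toy stand-in for the decoder. MFX_ERR_MORE_DATA is the real Media SDK status name, but `ToyDecoder`, its fixed frame size, and the enum values below are invented for illustration only:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative status codes; the real mfxStatus values differ.
enum Status { MFX_ERR_NONE = 0, MFX_ERR_MORE_DATA = -10 };

// Toy stand-in for the decoder: it "completes" a frame once 100 bytes
// have accumulated, regardless of how the input was chunked.
struct ToyDecoder {
    static const size_t kFrameSize = 100;
    size_t buffered = 0;
    Status DecodeFrameAsync(const std::vector<uint8_t>* bitstream) {
        if (bitstream) buffered += bitstream->size();
        if (buffered < kFrameSize) return MFX_ERR_MORE_DATA;  // need more input
        buffered -= kFrameSize;
        return MFX_ERR_NONE;  // one frame decoded
    }
};

// Feeding loop: supply the next chunk whenever the decoder asks for
// more data; after each chunk, collect any frames it completed.
int FeedChunks(ToyDecoder& dec, const std::vector<std::vector<uint8_t>>& chunks) {
    int frames = 0;
    std::vector<uint8_t> empty;
    for (const auto& c : chunks) {
        Status s = dec.DecodeFrameAsync(&c);
        while (s == MFX_ERR_NONE) {  // a frame came out; ask again
            ++frames;
            s = dec.DecodeFrameAsync(&empty);
        }
        // s == MFX_ERR_MORE_DATA: loop around and feed the next chunk.
    }
    return frames;
}
```

The point is that chunk boundaries and frame boundaries are independent: the decoder accumulates the fragments internally and signals MFX_ERR_MORE_DATA until a whole frame is available.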

MyMother
Beginner

hi Jeffrey Mcallister -super man,

     Many thanks for your reply.

     If more data is needed to decode the next frame, just keep supplying it when DecodeFrameAsync returns MFX_ERR_MORE_DATA. Media SDK will take care of composing the fragments until it has enough data to decode a frame.

>> If I have no more data to feed (e.g. playback stops), will DecodeFrameAsync automatically decode the complete frames and discard the incomplete frame data that was fed previously? If not, how can we do that?

 

Jeffrey_M_Intel1
Employee

Media SDK decode, VPP, and encode require a "draining" step to finalize processing of frames held in hardware buffers. This is done by passing a null pointer for the input. For examples, please see the Media SDK tutorials and samples.
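The drain convention can be sketched the same way with a toy stand-in (the struct and enum values below are invented for illustration, not Media SDK code; in the real API you pass a NULL bitstream pointer to DecodeFrameAsync until no more buffered frames come out):

```cpp
// Illustrative status codes; the real mfxStatus values differ.
enum Status { MFX_ERR_NONE = 0, MFX_ERR_MORE_DATA = -10 };

// Toy stand-in: a decoder holding some already-decoded frames in its
// internal buffers at the moment the input stops.
struct ToyDecoder {
    int pending = 3;  // frames still buffered inside the decoder
    Status DecodeFrameAsync(const void* bitstream) {
        if (bitstream) return MFX_ERR_MORE_DATA;     // normal feeding path (elided)
        if (pending == 0) return MFX_ERR_MORE_DATA;  // drain complete
        --pending;
        return MFX_ERR_NONE;  // one buffered frame retrieved
    }
};

// Drain: pass a null bitstream pointer until no more frames come out.
int Drain(ToyDecoder& dec) {
    int frames = 0;
    while (dec.DecodeFrameAsync(nullptr) == MFX_ERR_NONE)
        ++frames;
    return frames;
}
```

Once the drain loop sees the "more data" status with a null input, every complete frame has been delivered and any trailing incomplete fragment is simply dropped.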
