Varun_R_2
Beginner

Init Error


Hi All,

I was trying to write a simple decode application from scratch. I updated all the required init parameters in the structure, but when running the application I get an init error.

Here is a snapshot of the structure parameters from gdb:

(gdb) p Params_in 
$3 = {reserved = {0, 0, 0}, reserved3 = 0, AsyncDepth = 1, {mfx = {reserved = {0, 0, 0, 0, 0, 0, 0}, LowPower = 16, BRCParamMultiplier = 0, FrameInfo = {
        reserved = {0, 0, 0, 0}, reserved4 = 0, BitDepthLuma = 0, BitDepthChroma = 0, Shift = 0, FrameId = {TemporalId = 0, PriorityId = 0, {{
              DependencyId = 0, QualityId = 0}, {ViewId = 0}}}, FourCC = 842094158, {{Width = 1280, Height = 720, CropX = 0, CropY = 0, CropW = 1280, 
            CropH = 720}, {BufferSize = 47187200, reserved5 = 47187200}}, FrameRateExtN = 60000, FrameRateExtD = 2002, reserved3 = 0, AspectRatioW = 1, 
        AspectRatioH = 1, PicStruct = 1, ChromaFormat = 1, reserved2 = 0}, CodecId = 541283905, CodecProfile = 128, CodecLevel = 31, NumThread = 0, {{
          TargetUsage = 4, GopPicSize = 1, GopRefDist = 0, GopOptFlag = 0, IdrInterval = 0, RateControlMethod = 0, {InitialDelayInKB = 0, QPI = 0, 
            Accuracy = 0}, BufferSizeInKB = 0, {TargetKbps = 10, QPP = 10, ICQQuality = 10}, {MaxKbps = 0, QPB = 0, Convergence = 0}, NumSlice = 0, 
          NumRefFrame = 0, EncodedOrder = 0}, {DecodedOrder = 4, ExtendedPicStruct = 1, TimeStampCalc = 0, SliceGroupsPresent = 0, 
          MaxDecFrameBuffering = 0, reserved2 = {0, 0, 0, 10, 0, 0, 0, 0}}, {JPEGChromaFormat = 4, Rotation = 1, JPEGColorFormat = 0, InterleavedDec = 0, 
          reserved3 = {0, 0, 0, 0, 10, 0, 0, 0, 0}}, {Interleaved = 4, Quality = 1, RestartInterval = 0, reserved5 = {0, 0, 0, 0, 0, 10, 0, 0, 0, 0}}}}, 
    vpp = {reserved = {0, 0, 0, 0, 0, 0, 0, 16}, In = {reserved = {0, 0, 0, 0}, reserved4 = 0, BitDepthLuma = 0, BitDepthChroma = 0, Shift = 0, FrameId = {
          TemporalId = 0, PriorityId = 0, {{DependencyId = 0, QualityId = 0}, {ViewId = 0}}}, FourCC = 842094158, {{Width = 1280, Height = 720, CropX = 0, 
            CropY = 0, CropW = 1280, CropH = 720}, {BufferSize = 47187200, reserved5 = 47187200}}, FrameRateExtN = 60000, FrameRateExtD = 2002, 
        reserved3 = 0, AspectRatioW = 1, AspectRatioH = 1, PicStruct = 1, ChromaFormat = 1, reserved2 = 0}, Out = {reserved = {541283905, 2031744, 262144, 
          1}, reserved4 = 0, BitDepthLuma = 0, BitDepthChroma = 0, Shift = 0, FrameId = {TemporalId = 0, PriorityId = 10, {{DependencyId = 0, 
              QualityId = 0}, {ViewId = 0}}}, FourCC = 0, {{Width = 0, Height = 0, CropX = 0, CropY = 0, CropW = 0, CropH = 0}, {BufferSize = 0, 
            reserved5 = 0}}, FrameRateExtN = 0, FrameRateExtD = 0, reserved3 = 0, AspectRatioW = 0, AspectRatioH = 0, PicStruct = 0, ChromaFormat = 0, 
        reserved2 = 0}}}, Protected = 0, IOPattern = 2, ExtParam = 0x0, NumExtParam = 0, reserved2 = 0}

I am not able to identify the issue. Please help me or give me some pointers.

 

1 Solution
Surbhi_M_Intel
Employee

Hey Varun, 

Sorry it took me a while to get back to you. A few pointers to improve performance:

  • Use video memory instead of system memory when using hardware acceleration, so that your pipeline stays on the GPU and the number of copies is reduced. You can switch from simple_transcode to simple_transcode_vmem, which keeps the entire pipeline in video memory. 
  • What AsyncDepth are you using? The application can use async depth to queue several asynchronous operations in the pipeline before explicitly synchronizing. In our experiments, for a single video an async depth of 4 or 5 gives better performance. 
  • What target usage (TU) are you using for encoding? There are three major modes: TU1 gives the best quality but is often slow; TU7 gives the best speed at some cost in quality; TU4 is a balanced approach. 
  • Beyond the points above, I recommend checking performance with file-based transcoding first and then adding the streaming layer, so you can see where the bottleneck is. 

Our samples are better optimized and show the full capability, but they are lengthy and a little hard to start with, so we usually ask customers to begin with the tutorials (which make the pipeline simpler to understand) and then move to the samples for better performance. In your case, if you are only modifying the reading and writing, it is worth checking performance with sample_multi_transcode.

Unfortunately we don't have an example that does live streaming; all of our examples are file-based pipelines. You can use any player to render: VLC, ffplay, or sample_decode. I know that VLC and sample_decode can use hardware acceleration to decode the output for display.

Thanks,
Surbhi 

 

 


4 Replies
Surbhi_M_Intel
Employee

Hi Varun, 

First, please make sure your hardware is supported. This can be checked easily through the samples or pre-built binaries, or send us the output of the System Analyzer tool, installed in the Media SDK directory under tools. After that, please take a look at our tutorials; they are quite simple compared to the long samples. You should be able to compare your application with the simple tutorials in the tutorial package, which can be downloaded from https://software.intel.com/en-us/intel-media-server-studio-support/code-samples#tutorials. 

Thanks,
Surbhi

Varun_R_2
Beginner

Hi Surbhi,

Thanks for the details. 

I edited the mediasdk-tutorials-0.0.3/simple_5_transcode code to transcode H.264 to MPEG2 from a network stream received over UDP (instead of reading from a file); the application acts as a server.

Using ffmpeg, I am streaming webcam video to the above application. The server receives the video and performs the transcoding, but the video clarity is not good. To encode to MPEG2 I changed only the following parameter; everything else is the same.

mfxEncParams.mfx.CodecId = MFX_CODEC_AVC;   // original
mfxEncParams.mfx.CodecId = MFX_CODEC_MPEG2; // changed

I'm attaching transcode file and commands used. 

Transcode application (the input file will not be used; the encoded output is written to tom.mpeg2):

./simple_transcode -b 1000 -f 30/1 /home/vrapelly/Test/out_qsv.h264 tom.mpeg2

FFmpeg command (CLI options copied from a page found on the internet):

ffmpeg -f v4l2 -framerate 30 -re -video_size 1024x2048 -i /dev/video0 -b:v 2k -bufsize 128k  -f h264 "udp://10.70.52.97:6900" -r 30

Please let me know:

1. Are there any other sample applications that demonstrate live transcoding (not reading from and writing to a file)?

2. If I have to play the encoded stream, using VLC or any other tool, what is the recommended way of doing it? 

 


Varun_R_2
Beginner

Thanks for the detailed response.

I will check as per your recommendations and get back to you.

 
