Hi,
I have a hardware H.264 encoder PCI card which produces a Baseline profile stream with different levels depending on resolution, frame rate and bitrate.
When I decode this stream with QuickSync, I see very different delays depending on which level I choose.
For example, let's look at a 1920x1080 30p 8 Mbit/s stream (a sketch of the level-derived DPB math follows the list):
(1) Using auto settings on my hardware encoder (Baseline Profile, BP, 66)
- QuickSync decoder wants 18 frames in the surface allocator, resulting delay ~500ms
Now I hard-code the level (which is, strictly speaking, wrong):
(2) Hard coding to 4.1 (Baseline Profile, BP, 66)
- QuickSync decoder wants 12 frames in the surface allocator, resulting delay ~250ms
(3) Hard coding to 4.0 on my encoder (Baseline Profile, BP, 66)
- QuickSync decoder wants 8 frames in the surface allocator, resulting delay ~120ms
(4) Hard coding to 3.2 on my encoder (Baseline Profile, BP, 66)
- QuickSync decoder wants 6 frames in the surface allocator, resulting delay ~66ms
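My working theory (an assumption on my side, based on H.264 Annex A rather than any documented QuickSync behaviour) is that when the SPS carries no VUI bitstream_restriction, the decoder has to size the DPB from the level alone, and the SDK then adds its own pipeline surfaces on top. A minimal sketch of that worst-case DPB calculation:

```cpp
// Sketch only: worst-case DPB size implied by the level, per H.264 Annex A:
//   MaxDpbFrames = min(MaxDpbMbs / (PicWidthInMbs * FrameHeightInMbs), 16)
#include <algorithm>
#include <cstdio>

int MaxDpbFrames(int maxDpbMbs, int width, int height) {
    int picWidthInMbs    = (width + 15) / 16;   // 1920 -> 120
    int frameHeightInMbs = (height + 15) / 16;  // 1080 -> 68
    return std::min(maxDpbMbs / (picWidthInMbs * frameHeightInMbs), 16);
}

int main() {
    // MaxDpbMbs values taken from Table A-1 of the H.264 spec
    struct Level { const char *name; int maxDpbMbs; } levels[] = {
        {"3.2", 20480}, {"4.0", 32768}, {"4.1", 32768}, {"5.0", 110400},
    };
    for (const Level &l : levels)
        std::printf("level %s -> %d DPB frames at 1920x1080\n",
                    l.name, MaxDpbFrames(l.maxDpbMbs, 1920, 1080));
    return 0;
}
```

This prints 2, 4, 4 and 13 frames respectively, which tracks the direction of the numbers above but not the exact values, so there is clearly more going on inside the decoder.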
In cases 2, 3 and 4 the signalled level is actually wrong, yet the stream decodes faster and renders properly with no pixelation. I am using the decoder with AsyncDepth = 1 and D3D11, and I allocate mfxFrameAllocRequest::NumFrameSuggested surfaces in my ID3D11Texture2D pool.
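For context, this is roughly how I obtain that number (a trimmed sketch of my setup; session creation and the MFXVideoDECODE_DecodeHeader call that fills the parameters are elided):

```cpp
#include <mfxvideo.h>

// Ask the decoder how many surfaces it wants for this stream; 'par' is
// assumed to be already filled from the SPS via MFXVideoDECODE_DecodeHeader.
mfxU16 QuerySurfaceCount(mfxSession session, mfxVideoParam &par) {
    par.AsyncDepth = 1;                               // shortest pipeline
    par.IOPattern  = MFX_IOPATTERN_OUT_VIDEO_MEMORY;  // D3D11 surfaces

    mfxFrameAllocRequest request = {};
    if (MFXVideoDECODE_QueryIOSurf(session, &par, &request) < MFX_ERR_NONE)
        return 0;

    // This is the value I size my ID3D11Texture2D pool with, and it is the
    // number that changes with the level in the cases above.
    return request.NumFrameSuggested;
}
```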
- Is this expected behaviour?
- Is it possible to decode a Baseline profile stream at level 5.0 or higher with a 2-frame delay, for this type of stream?
- Is it possible to minimize frame caching by responding to an Alloc() request with fewer surfaces than mfxFrameAllocRequest::NumFrameSuggested (see the sketch below)? I guess I always have to honor the mfxFrameAllocRequest::NumFrameMin parameter.
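To make the last question concrete, I mean something like this hypothetical Alloc() callback, registered via MFXVideoCORE_SetFrameAllocator before Init (the actual D3D11 allocation and the kExtraSurfaces knob are my own placeholders):

```cpp
#include <mfxvideo.h>
#include <algorithm>

static const mfxU16 kExtraSurfaces = 0;  // placeholder: extra headroom, if any

static mfxStatus MyAlloc(mfxHDL /*pthis*/, mfxFrameAllocRequest *request,
                         mfxFrameAllocResponse *response) {
    // Honor NumFrameMin, but hand back fewer than NumFrameSuggested.
    mfxU16 count = std::min<mfxU16>(request->NumFrameSuggested,
                                    request->NumFrameMin + kExtraSurfaces);
    // ... allocate 'count' ID3D11Texture2D frames and fill response->mids ...
    response->NumFrameActual = count;
    return MFX_ERR_NONE;
}
```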
br,
Carl