I'm decoding from system memory buffers into system memory buffers, using hardware acceleration. Unfortunately, this gives me NV12 buffers that I then have to convert on the CPU to RGB32 for presentation, which is very slow. Is it possible to have the conversion performed by the hardware, and what would be the simplest way of doing that?
I have checked the reference manual, and under Hardware Acceleration there is a section explaining that under D3D9 and D3D11 the decoder can have a "Decoder Render Target" of type RGB32; however, I don't understand what that entails or how to set it up. Does that require using D3D surfaces as output? Where can you specify that you want RGB32 as output? Is there a way to still automatically convert back to system memory buffers?
Thanks.
Yes, using opaque memory (out of decode and into VPP) is a good way to decode to RGB32 in system memory.
Please let me know if you have any issues with this.
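For reference, here is a minimal sketch of the VPP side of that pipeline, assuming the Media SDK C API: the decoder runs with MFX_IOPATTERN_OUT_OPAQUE_MEMORY, and VPP takes those opaque NV12 surfaces as input and writes RGB32 (MFX_FOURCC_RGB4 in Media SDK terms) to system memory. Session setup, header parsing, and the opaque pool allocation are omitted; configure_vpp_nv12_to_rgb32 and decOut are illustrative names, not part of the SDK.

```c
#include <string.h>
#include <mfxvideo.h>

/* Configure VPP to take the decoder's opaque NV12 output and produce
 * RGB32 in system memory. decOut is the decoder's output mfxFrameInfo,
 * as filled in by MFXVideoDECODE_DecodeHeader(). */
static mfxStatus configure_vpp_nv12_to_rgb32(mfxSession session,
                                             const mfxFrameInfo *decOut)
{
    mfxVideoParam vppPar;
    memset(&vppPar, 0, sizeof(vppPar));

    /* Input side mirrors the decoder's NV12 output exactly. */
    vppPar.vpp.In = *decOut;

    /* Output side: same geometry, but RGB32 ("RGB4" in Media SDK). */
    vppPar.vpp.Out = *decOut;
    vppPar.vpp.Out.FourCC       = MFX_FOURCC_RGB4;
    vppPar.vpp.Out.ChromaFormat = MFX_CHROMAFORMAT_YUV444;

    /* Opaque surfaces in (shared with the decoder's output pool),
     * plain system memory out, so the app gets CPU-readable RGB32. */
    vppPar.IOPattern = MFX_IOPATTERN_IN_OPAQUE_MEMORY |
                       MFX_IOPATTERN_OUT_SYSTEM_MEMORY;

    return MFXVideoVPP_Init(session, &vppPar);
}
```

The decoder would correspondingly be initialized with MFX_IOPATTERN_OUT_OPAQUE_MEMORY, and the shared pool is described by an mfxExtOpaqueSurfaceAlloc extended buffer attached to both the decoder's and VPP's mfxVideoParam, sized from MFXVideoDECODE_QueryIOSurf() and MFXVideoVPP_QueryIOSurf().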
The Media SDK VPP module can convert NV12 to RGB32.
I am experiencing this very same issue. Does anyone from Intel have a specific answer to this that is actually helpful?