I'm trying to understand if there's a more efficient way of doing things than what I'm doing now. Based on the decode samples, I have a single-threaded model like this:
It seems counter-productive to start an asynchronous operation only to wait for its completion immediately after.
So let's imagine a threaded model then:
Thread A (producer):
Thread B (consumer):
Is this really any more performant? We're still calling SyncOperation exactly the same number of times, so we're paying the same synchronisation costs. The only advantage I can see is that if the queue is usually full and large enough, there's a better chance the decode will have finished by the time we call SyncOperation. But is that worth the extra complexity and the inevitable contention on the queue? Never mind some kind of polling mechanism for Thread B to detect when frames enter the queue, which isn't free either.
You've probably seen this already, but the Media SDK Tutorial has a section on efficient decode. Those samples don't include rendering. I'm not aware of any data on the relative efficiency of rendering from the same thread versus a different thread, but based on other work with Media SDK, the decision of how to implement could be informed by Occam's Razor: the simplest application code often has the best performance.
I agree with your concerns about adding extra complexity. It seems like multiple threads would add a lot of corner cases, development time, and maintenance cost for a relatively minor theoretical benefit. The more complex code may even be slower.
An asynchronous pipeline, even if designed to work in one thread (e.g. as in sample_encode), does give a significant performance benefit. So if your app doesn't require each frame immediately after it's been decoded, an asynchronous pipeline is recommended.