Hello support team,
My program encodes video frames received from a frame grabber. I convert each frame from RGB to NV12. I had to make a few changes in the original sample_encode: the code (at least what I have) originally encodes YUV frames from a YV12 file, while I needed to inject frames received from a frame grabber.
I encapsulated the code nicely in a C++ class and created a thread that uses the Media SDK API to do the encoding. This works fine.
My simple question is:
Can I use this scheme (create another instance of the class along with an encoding thread) for an additional video stream?
I am quite sure I can start multiple encoding sessions that way; I just wanted the experts' opinion on that.
BTW, is there a code sample closer to what I need (one that performs encoding of frames in real time)? I would be happy to get one.
Yes, the approach you've described should work fine. The main thing you will want to watch for in this implementation is thread safety.
The main format conversion and encode loop operations are thread safe. However, initialization might not be. Best practice is to initialize sessions and surfaces for all threads one at a time before launching parallel encode threads.
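The recommended structure can be sketched as follows. This is a hypothetical skeleton, not actual Media SDK code: `StreamEncoder`, `Init`, and `EncodeLoop` are placeholder names, with comments marking where the real session/surface setup and encode calls would go.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Placeholder for your per-stream encoder class (session + surface pool).
struct StreamEncoder {
    std::atomic<int> framesEncoded{0};

    // Phase 1 work: session creation, surface allocation, encoder init.
    void Init() { /* MFXInit, encoder Init, surface allocation go here */ }

    // Phase 2 work: the per-stream encode loop, safe to run in parallel.
    void EncodeLoop(int frames) {
        for (int i = 0; i < frames; ++i)
            ++framesEncoded;   // stand-in for the real encode call
    }
};

// Initialize all streams one at a time, then launch the encode threads.
int RunStreams(int streamCount, int framesPerStream) {
    std::vector<StreamEncoder> streams(streamCount);

    for (auto& s : streams)    // phase 1: serial, single-threaded init
        s.Init();

    std::vector<std::thread> threads;
    for (auto& s : streams)    // phase 2: one encode thread per stream
        threads.emplace_back([&s, framesPerStream] { s.EncodeLoop(framesPerStream); });
    for (auto& t : threads)
        t.join();

    int total = 0;
    for (auto& s : streams)
        total += s.framesEncoded;
    return total;
}
```

The key point is the two distinct phases: nothing touching initialization runs concurrently, and only the encode loops overlap.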
I agree that a sample like this would be very helpful. For now you have a few options: add threading to one of the encode tutorials (separating out initialization as outlined above), or modify sample_multi_transcode to add frames to the sink/source surface queue directly from your frame grabber instead of from decode. If you're looking for a quick start, the tutorials are a good choice. If you're looking for a rich example with nearly all of the features you need, use sample_multi_transcode.
Hope this helps,
Thanks a lot for the very fast reply!
I will follow your advice: first initializing the session and surfaces for the first stream, then the same for the second stream, and only after that launching the encoding threads. Hope I understood you correctly... :-)
Many thanks Jeff,