In my app I'm going to establish a WebRTC session in which participants share a virtual room with a virtual background. I'll of course be using the user segmentation API to do the background subtraction.
Can someone give me the general steps for setting this up? I'm not looking for detailed instructions, just the top-level steps/API features I need to involve to make this happen. I know I can take the user-segmented frames, merge in a background of my choosing, and deliver the result to the WebRTC connection (roughly as in the sketch after these questions). But:
1) Is there anything higher-level in the SDK that might do some of the "heavy lifting"?
2) What is the general technique for merging the individual user-segmented frames from different physical locations into a single stream that can be multicast over a WebRTC session?
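To make concrete what I mean by merging in a background, here is a rough per-frame compositing sketch on the browser side (TypeScript). It assumes the segmentation API hands me a per-pixel alpha mask for the user; the mask itself and the `compositeFrame` helper are just placeholders for illustration, not SDK calls:

```typescript
// Virtual background of my choosing (path is a placeholder).
const background = new Image();
background.src = "virtual-room.png";

const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d")!;

// camera: the local <video> element showing the raw capture.
// mask: assumed output of the segmentation step, opaque where the user is.
function compositeFrame(camera: HTMLVideoElement, mask: ImageBitmap): void {
  canvas.width = camera.videoWidth;
  canvas.height = camera.videoHeight;

  // Draw the camera frame, then keep only the pixels the mask marks as "user".
  ctx.drawImage(camera, 0, 0, canvas.width, canvas.height);
  ctx.globalCompositeOperation = "destination-in";
  ctx.drawImage(mask, 0, 0, canvas.width, canvas.height);

  // Paint the virtual background behind the segmented user.
  ctx.globalCompositeOperation = "destination-over";
  ctx.drawImage(background, 0, 0, canvas.width, canvas.height);
  ctx.globalCompositeOperation = "source-over";
}

// The composited canvas becomes the outgoing WebRTC video track.
const outStream = canvas.captureStream(30); // 30 fps
const pc = new RTCPeerConnection();
outStream.getVideoTracks().forEach(track => pc.addTrack(track, outStream));
```

So I understand the single-participant case; my questions are about whether the SDK already wraps any of this, and how to combine the segmented streams from multiple remote participants into one shared room view.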
Robert.