I want the server to only accept H.264, and I want the MCU to deliver only an H.264 mixed stream. How can I make sure this is happening and see proof of it?
I also remember being able to set the MCU stream bitrate, but I can't find this option in the management console. Where is it located? This is important to me.
edit: wow https://software.intel.com/en-us/forums/intel-collaboration-suite-for-webrtc/topic/670339
So the feature was removed, to be included in a future version?? This is very important functionality. Was it given back in 3.2? Will it be back in the next version? This should be the highest priority.
This is almost unusable without that feature. What bitrate does it output at each resolution, then? I need to know. Is there no way of changing it manually?
And how can I disable audio processing? I don't require audio at this point, and woogeen_audio CPU usage seems to add up with quite a few participants.
I can set "multistreaming" to true, but then I can't find where to set another set of resolutions/encodings for that one room. Has this feature been removed as well?
The "crop" option seems to crop the tops and bottoms of a 4:3 video when in 720p or 1080p resolution mode. Is there a way to get it to just crop the black bars on the sides? I think I remember seeing that feature announced a while back... I'd like to fit 12 240p 4:3 cameras within a 1280x720 space, but currently "left: 0, top: 0" yields a black bar on the sides.
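For the record, the arithmetic works out exactly: 4 columns of 320px and 3 rows of 240px tile a 1280x720 canvas with no bars at all. Here is a small sketch (the function name and the fractional left/top output are my own assumptions, mirroring how the layout config expresses positions) that computes such a grid of 4:3 sub-regions:

```javascript
// Tile a canvas with fixed-size camera regions, no black bars:
// 12 cameras at 320x240 (4:3) fit a 1280x720 (16:9) canvas exactly,
// since 4 * 320 = 1280 and 3 * 240 = 720.
function gridRegions(canvasW, canvasH, camW, camH) {
  const cols = Math.floor(canvasW / camW);
  const rows = Math.floor(canvasH / camH);
  const regions = [];
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      // left/top expressed as fractions of the canvas, the way
      // the custom layout configuration specifies region positions
      regions.push({ left: (c * camW) / canvasW, top: (r * camH) / canvasH });
    }
  }
  return regions;
}
```

`gridRegions(1280, 720, 320, 240)` yields exactly 12 regions, the first at `{left: 0, top: 0}`, which is the layout I'm after.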
I wish there was a way to have a dynamic bitrate based on the number of cameras in the room. My use case works best when the camera sizing/layout never modifies the camera, i.e. a 240p camera staying as a small 240p video, with the rest of the 720p mix being black/blank until more cameras come on. The black parts don't require bitrate, so I can safely push it down to something very small. It is annoying that currently a completely black/blank mixed stream uses a full bitrate too. Ideally I would set the frame rate to 12-20 fps, and the bitrate to 166 kbps * number of cams broadcasting.
It seems the way this is set up currently, I can only fit 9 240p cams in 720p, and even just having 1 cam comes at a cost of around 2000kbps, rather than 166kbps. It is disappointing!
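The dynamic bitrate rule I'm describing could be as simple as the sketch below. To be clear, this is not an existing MCU option; `mixBitrateKbps`, the 166 kbps-per-camera figure, and the 2000 kbps cap are my own assumptions:

```javascript
// Per-camera bitrate rule: 166 kbps per publishing camera,
// clamped so a mostly-black mix never costs more than a full
// fixed-bitrate mix does today (~2000 kbps at 720p).
function mixBitrateKbps(publishingCams, perCamKbps = 166, maxKbps = 2000) {
  if (publishingCams <= 0) return 0; // nothing publishing, nothing to spend
  return Math.min(publishingCams * perCamKbps, maxKbps);
}
```

With 12 cameras this tops out at 1992 kbps, roughly what the 720p mix costs now, while a single camera would need only 166 kbps instead of ~2000.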
Too many questions. Let me answer them one by one.
1. For the mixed video stream, there is no option to disable a specific output codec. However, as long as no one subscribes with a given codec, such as VP8, no VP8 stream will be generated.
2. For the mixed audio stream, same as video: we can't disable it, but try subscribing to the video stream only and the audio processing load should be lower. We are also working on audio processing node optimization; efficiency will be improved in the next major 3.3 release, around December.
3. Microsoft Edge support has been proven to work internally and was demoed at this year's RT Web Solution event; it will be available in the next major 3.3 release.
4. If you set multi-streaming to true, the resolution list is pre-configured, not manually specified. You can check the client SDK documentation for the pre-configured resolution list for each base resolution.
5. 'crop' means cropping the video picture so that it fits the new aspect ratio without scaling. What do you mean by just cropping the black bars? The black bars fill the space caused by the ratio difference; if we crop them, what would fill the space instead? Do you mean scaling the video?
6. Currently the mixed video stream uses CBR encoding, rather than VBR, to better fit internet network bandwidth. And we don't support dynamically changing the bitrate and fps during the meeting; we will consider such a request for a future release. One suggestion: subscribe to the forward streams when the camera count is small, and switch to the mixed stream once the count exceeds a specific amount.
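The forward-vs-mix workaround in point 6 can be kept to a small pure decision rule on the client. This is only a sketch: `shouldUseMix`, `pickStreams`, the `isMixed` flag, and the threshold are my own names and assumptions; the actual subscribe/unsubscribe calls would go through the client SDK:

```javascript
// Below the threshold, N individual forward streams cost less total
// bandwidth than one full-bitrate mix of a mostly-black canvas;
// above it, the single mixed stream wins.
function shouldUseMix(publishingCams, threshold = 4) {
  return publishingCams >= threshold;
}

// Hypothetical glue: re-evaluate whenever a stream is added or removed,
// assuming each stream object carries an isMixed flag.
function pickStreams(allStreams, threshold = 4) {
  const forwards = allStreams.filter(s => !s.isMixed);
  return shouldUseMix(forwards.length, threshold)
    ? allStreams.filter(s => s.isMixed) // subscribe the one mixed stream
    : forwards;                         // subscribe individual forwards
}
```

The awkward part, as noted later in the thread, is the handover: the client has to have the new subscription playing before dropping the old one, which briefly doubles its bandwidth use.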
Thanks for your reply Lei,
1. How do I make sure a browser doesn't subscribe to VP8, then? It is fairly useless now as far as I'm aware, but I'm worried Google may still prefer it. Do you know whether VP8 or openH264 encoding performance is better?
3. That's exciting about Microsoft Edge
4. So you mean, again, any resolution could be activated as long as a client subscribes to it? Does this mean someone could subscribe to 4K_UHD and there's no way to prevent it other than turning off multi-streaming altogether?
5. What I mean is that inside a 1280x720 resolution, you should be able to fit three rows of four 4:3 cameras perfectly. However, "left: 0" still places black bars on the sides. If I activate cropping, it crops the top and bottom off to force 16:9; this doesn't help, as I just want the black sides cropped off so that "left: 0" puts the 4:3 camera on the edge. The mix resolution should not always have to assume 16:9 cameras or crop to 16:9. Without this feature, I can only fit 9 240p cameras rather than 12, and there are black spaces. Webcams are a situation where people often don't want a wide viewing angle, as the person is usually centered and is all that matters to capture.
6. That is very unfortunate! Webcams are unique in that there are often a lot of non-moving parts, and a mixed stream is supposed to save bandwidth by placing it all together and taking advantage of this. Adjusting the bandwidth number was my favourite feature: it meant I could provide a low-bandwidth mode and a high-bandwidth mode, offer users an expected level of quality and of download usage, and squeeze the most efficiency out of my server while knowing what the appearance was at each bitrate. A higher bitrate also uses a lot more CPU power than a lower bitrate, and I was relying on this.
What I used to do was run the received mixed stream through a canvas client-side and physically split the cameras up into separate divs. This allowed me to manipulate their layout and size dynamically on the client side to suit any device and view, just as if I were using forward mode, but still with all the bandwidth savings. Because I kept each camera's true size the same, there was often a lot of black space on the mixed stream (which I would hide); it just meant that when only a few cameras were on, they got better quality than when there were more.
Usually, encoding a 1920x1080 stream containing only 240p worth of pixels, with the rest black, is fairly low complexity, and I only needed to provide enough bandwidth for what a single 240p camera would require. If I could have this scale up in a customisable way based on how many published cameras there are, I would be very happy.
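The canvas split described above boils down to computing a fixed source rectangle per camera and blitting it with `drawImage`. A sketch (the function name and grid parameters are mine, matching the 4-column 320x240 layout discussed earlier); the actual draw loop is shown only as a comment since it needs a live `<video>` element in a browser:

```javascript
// Source rectangle for camera N inside the mixed stream, assuming a
// grid of fixed 320x240 tiles laid out left-to-right, top-to-bottom.
function sourceRect(index, cols = 4, camW = 320, camH = 240) {
  return {
    sx: (index % cols) * camW,          // x offset into the mix frame
    sy: Math.floor(index / cols) * camH, // y offset into the mix frame
    sw: camW,
    sh: camH,
  };
}

// In the browser, each camera's own canvas would then be refreshed on a
// requestAnimationFrame loop with the 9-argument drawImage form:
//   const { sx, sy, sw, sh } = sourceRect(i);
//   ctx.drawImage(mixVideoEl, sx, sy, sw, sh, 0, 0, sw, sh);
```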
Do certain mix resolutions have a limit on CBR? I think I noticed 720p staying around 2000 kbps, but do I have to worry about it going higher if my network allows it? Some people have bandwidth limits! I find it so strange that this feature was removed; it is such a critical part. Yes, that idea is an okay fix, but it still adds unnecessary complication: when switching to mix mode I will need to make sure the mixed stream has loaded before switching, but the client might not have enough bandwidth to load it well while the forward streams are still going too.
New question though: I thought installing Intel Media Server Studio 2016 R1 would be easy on Ubuntu, but it is really confusing! And it seems most of the clear instructions on the internet are for the latest 2017 version, which seems to say it doesn't work with Core i5 2xxx GPU acceleration. I hope 2016 does...
Is there any easy way of installing Intel Media Server Studio 2016 R1 so I can try out accelerated encoding? I'm not too smart with Linux.
edit: I'm also noticing that if, for example, I have 6 users publishing and receiving, plus 7 more receiving, woogeen_access seems to take up a ridiculously large amount of CPU, more than the video encoding portion, and makes the server run extremely slow.
What exactly is so CPU-intensive about just passing a WebRTC stream on to clients? No extra videos should be encoded, and no extra videos uploaded. I never would have thought woogeen_access would require more CPU than the video portion, even with many users receiving. Is it just extremely inefficient, or is there a reason woogeen_access can take up more CPU than the video encoding of a room?
edit2: and I can't find any info on this external room stream output to RTSP; where is the documentation?
1. Yes, we can't prevent any client from subscribing with VP8 if it prefers it. We haven't conducted any performance comparison tests between VP8 and openH264.
4. We just have a pre-configured resolution list for customers to subscribe to, and a resolution is only activated upon subscription. Keep in mind that the pre-configured resolutions are always lower than the base resolution you specified.
5. The root cause is that the current Intel CS for WebRTC only supports the same resolution across the root window and the sub-regions.
6. Generally CBR is more suitable for stream transmission than VBR, but it definitely can't cover all usages. For CBR, we have temporarily disabled the mixed stream bitrate setting and will add it back in the next major 3.3 release. Please check the coming v3.2.1 release notes for a workaround for the mixed stream bitrate setting.
[Media SDK] Media Server Studio now only officially supports CentOS and SUSE Linux. For Intel CS for WebRTC, we integrate with Media Server Studio and validate on CentOS. Although Media Server Studio provides some instructions to help users build Ubuntu binaries, we suggest you use the validated CentOS environment. Keep in mind, only the video agent needs to meet the Media Server Studio OS requirements.
[Access Node] We will release the latest v3.2.1 version within one week, which should bring an improvement here. Can you please verify it then and let us know whether it works for you? Thanks!
Regarding the external room stream output to RTSP/RTMP, we provide a simple example in the MCU sample application. Please check its code for the detailed usage.