Theoretically, it is possible to auto-deploy the webrtc agent in K8S with host networking. The webrtc agent requires a large range of ports to be open for peer connections, but the current K8S implementation does not allow specifying a port range in YAML, so declaring the ports one by one in YAML files is not practical.
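As a rough sketch of what the host-network workaround looks like (the names and image are illustrative placeholders, not the actual ICS artifacts):

```yaml
# Illustrative sketch only: a Pod using host networking so the webrtc agent
# can bind its media port range directly on the node, sidestepping the
# per-port declarations K8S would otherwise require.
apiVersion: v1
kind: Pod
metadata:
  name: webrtc-agent                         # hypothetical name
spec:
  hostNetwork: true                          # share the node's network namespace
  containers:
  - name: webrtc-agent
    image: my-registry/webrtc-agent:latest   # placeholder image
    # No `ports:` entries are needed: with hostNetwork the container can
    # open its whole media port range (e.g. a wide UDP range) on the host.
```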
Thanks for the answer!
So that means the only way to serve a large number of concurrent conferences is to use Docker Swarm, do I understand correctly?
For a common scenario, one webrtc agent per node is enough, so you can use K8S to deploy your application on n nodes with n webrtc agents using host networking. BTW, could you share the common scenarios of your application and your deployment scale, so that we can evaluate them and provide better suggestions?
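The "n nodes with n webrtc agents" layout above maps naturally onto a DaemonSet, which schedules exactly one pod per node. A minimal sketch, with hypothetical names and a placeholder image:

```yaml
# Illustrative sketch: a DaemonSet runs exactly one webrtc agent on every
# node, giving the one-agent-per-node layout described above.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: webrtc-agent                             # hypothetical name
spec:
  selector:
    matchLabels:
      app: webrtc-agent
  template:
    metadata:
      labels:
        app: webrtc-agent
    spec:
      hostNetwork: true                          # each agent binds the media ports on its own node
      containers:
      - name: webrtc-agent
        image: my-registry/webrtc-agent:latest   # placeholder image
```

Adding nodes to the cluster then automatically adds webrtc agents, one per node.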
Use case:
We want to host at most 200 concurrent video chat rooms, each with at most 4 participants. All conferences will be recorded to a single destination, and our computing resources should be scalable to optimize cost.
I want to design it with this model; we have:
- a single virtual machine running the cluster manager, nuve, RabbitMQ, and MongoDB
- a Kubernetes cluster running multiple nodes, where each node contains the portal and the conference, webrtc, audio, and video agents
- a single virtual machine containing a recording agent, responsible for storing video in a file system directory that we mount to Amazon S3
Is that a feasible and optimal design? Thanks in advance!
Hi Samuel, thanks for sharing your design model. With this model, you need to make sure the two virtual machines and the k8s cluster can communicate with each other, since all components communicate through RabbitMQ, and the recording agent also needs to communicate with the streaming agent. In fact, in mixing scenarios the video agent requires more CPU resources, while in broadcasting scenarios the access agents (like the webrtc agent and streaming agent) require more bandwidth. So you need to optimize the deployment of the different modules according to your testing scenarios: MCU mode or SFU mode, different video resolutions, and other video parameters.
I know that you have deployed a cluster on AWS with Docker containers. How do you specify the following parameters for public access when you package a Docker container? Additionally, I am using Google Cloud, which sits behind a NAT environment.
I have implemented the above architecture on Google Cloud Platform with Docker and the instance-group use case. The infrastructure is able to scale the number of instances up and down based on CPU utilization. I haven't tested it in production yet, but it is a big step forward for us. Thanks to the ICS team again!
I have configured MCU 4.3 on an AWS default VPC with the cluster model. I mean the webrtc and audio/video agents are started on a separate machine, and the other agents are started on another machine. It's working fine.
Now I need to autoscale the webrtc and audio/video agents with an autoscaling process. Is it possible? If yes, how can I do it? Please help me.