
Automatic Scaling

Hi all, I want to know whether it is possible to deploy Intel CS for WebRTC in an auto-scaling distributed system such as Kubernetes. Thanks in advance.


Theoretically, it is possible to auto-deploy the WebRTC agent in Kubernetes using the host network. The WebRTC agent requires a large range of ports to be open for peer connections, but the current Kubernetes implementation does not allow specifying a port range in YAML, and listing the ports one by one in the manifest is not practical.
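
A minimal sketch of such a host-network deployment (the image name is a placeholder, not part of any official ICS release):

```yaml
# Hypothetical Deployment for a WebRTC agent using the host network.
# With hostNetwork: true the pod binds ports directly on the node, so
# the UDP range used for peer connections does not have to be
# enumerated port-by-port in the manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webrtc-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webrtc-agent
  template:
    metadata:
      labels:
        app: webrtc-agent
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: webrtc-agent
        image: my-registry/ics-webrtc-agent:latest  # placeholder image
```

The trade-off is that a host-network pod shares the node's port space, so only one such agent can run per node.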


Thanks for the answer!

So that means the only way to serve a large number of concurrent conferences is to use Docker Swarm; do I understand correctly?


For the common scenario, one WebRTC agent per node is enough, so you can use Kubernetes to deploy your application on n nodes with n WebRTC agents using the host network. By the way, could you describe the common scenarios of your application and your deployment scale, so that we can evaluate them and provide better suggestions?
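
One way to sketch "one WebRTC agent per node" in Kubernetes is a DaemonSet, which schedules exactly one pod on every node (again, the image name is a placeholder):

```yaml
# Hypothetical DaemonSet: one host-network WebRTC agent per node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: webrtc-agent
spec:
  selector:
    matchLabels:
      app: webrtc-agent
  template:
    metadata:
      labels:
        app: webrtc-agent
    spec:
      hostNetwork: true   # agent binds ports directly on the node
      containers:
      - name: webrtc-agent
        image: my-registry/ics-webrtc-agent:latest  # placeholder image
```

Scaling the cluster then becomes a matter of adding or removing nodes; each new node automatically gets its own agent.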


Use case:

We want to host at most 200 concurrent video-chat rooms, each with at most 4 participants. All of the conferences will be recorded to a single destination, and our computing resources should scale to optimize cost.
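
As a rough capacity check under these limits (the per-stream bitrate is a hypothetical assumption for an SFU-style topology, not a measured ICS figure):

```python
# Rough peak-load estimate for the stated use case.
rooms = 200             # max concurrent rooms
participants = 4        # max participants per room
kbps_per_stream = 1000  # assumed bitrate per published stream (hypothetical)

# Each participant publishes one stream...
publishers = rooms * participants
# ...and, in an SFU-style room, subscribes to the other 3 streams.
subscriptions = rooms * participants * (participants - 1)

print(publishers)                             # 800 published streams
print(subscriptions)                          # 2400 subscribed streams
print(subscriptions * kbps_per_stream / 1e6)  # ~2.4 Gbps aggregate egress
```

Numbers like these help decide how many agent nodes the autoscaler should be allowed to reach.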

Architecture:

[Attached architecture diagram: Screen Shot 2019-01-23 at 2.56.03 PM.png]

I want to design it based on this model. We have:

- a single virtual machine running the cluster manager, nuve, RabbitMQ, and MongoDB

- a Kubernetes cluster running multiple nodes, each node containing the portal, conference, webrtc, audio, and video agents

- a single virtual machine containing a recording agent, responsible for storing recordings in a file-system directory that we mount to Amazon S3

Is that a possible and optimal design? Thanks in advance!
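
For the recording machine, one common way to expose an S3 bucket as a local directory is s3fs-fuse; a sketch (the bucket name and mount path are placeholders):

```shell
# Install s3fs-fuse (Ubuntu/Debian), then mount the bucket.
sudo apt-get install -y s3fs
mkdir -p /mnt/recordings
# Credentials can come from ~/.passwd-s3fs or an attached IAM role.
s3fs my-recordings-bucket /mnt/recordings -o iam_role=auto -o allow_other
```

Note that S3 over FUSE is not a full POSIX filesystem; an alternative is to record to local disk and periodically upload with `aws s3 sync`.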


Hi Samuel, thanks for sharing your design model. With this model, you need to make sure the two virtual machines and the Kubernetes cluster can communicate with each other, since all components communicate through RabbitMQ, and the recording agent also needs to communicate with the streaming agent. In fact, for mixing scenarios the video agent requires more CPU resources, while for broadcasting scenarios the access agents (such as the WebRTC agent and streaming agent) require more bandwidth. So you need to optimize the deployment of the different modules according to your test scenarios: MCU mode or SFU mode, different video resolutions, and other video parameters.


I know that you have deployed a cluster on AWS with Docker containers. How do you specify the following parameters for public access when you package a Docker container? Additionally, I am using Google Cloud, where instances sit behind NAT.

webrtc.network_interfaces (webrtc_agent/agent.toml)

portal.ip_address (portal/portal.toml)
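
For reference, the NAT-related entries in those files look roughly like the sketch below. The key names shown are an assumption from memory; check the comments in your own agent.toml and portal.toml for the exact names in your version, and the IP is a documentation placeholder:

```toml
# webrtc_agent/agent.toml (sketch)
[webrtc]
# Map the private interface to the instance's public IP so that ICE
# candidates carry an address reachable from outside the NAT.
network_interfaces = [{name = "eth0", replaced_ip_address = "203.0.113.10"}]

# portal/portal.toml (sketch)
[portal]
ip_address = "203.0.113.10"  # public IP that clients use to reach the portal
```

When baking these into a Docker image, a common pattern is to leave them blank in the image and fill them from environment variables in the container's entrypoint script, since the public IP differs per instance.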


I have implemented the above architecture on Google Cloud Platform using Docker and managed instance groups. The infrastructure is able to scale the number of instances up and down based on CPU utilization. I haven't tested it in production yet, but it is a big step forward for us. Thanks to the ICS team again!
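
For anyone following along, CPU-based autoscaling of a managed instance group can be enabled with gcloud roughly like this (group name, template, zone, and thresholds are placeholders):

```shell
# Create a managed instance group from an existing instance template,
# then attach an autoscaler driven by average CPU utilization.
gcloud compute instance-groups managed create webrtc-agents \
    --template=webrtc-agent-template --size=2 --zone=us-central1-a
gcloud compute instance-groups managed set-autoscaling webrtc-agents \
    --zone=us-central1-a \
    --min-num-replicas=2 --max-num-replicas=10 \
    --target-cpu-utilization=0.6
```

Each new instance still has to register itself with the cluster manager over RabbitMQ on boot, so the instance template's startup script should handle that.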


Hi, 

 

I have configured MCU 4.3 on the AWS default VPC with the cluster model; that is, the WebRTC and audio/video agents are started on a separate machine, and the other agents are started on another machine. It's working fine.

Now I need to autoscale the WebRTC and audio/video agents. Is it possible? If yes, how can I do it? Please help me.
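
Since the agents run on plain EC2 machines here, one option is an EC2 Auto Scaling group with a target-tracking policy on CPU; a sketch (group name, launch template, and subnet are placeholders, and the instance's startup script must still register the agent with the cluster manager):

```shell
# Create an Auto Scaling group for the agent machines, then scale it
# on average CPU utilization via a target-tracking policy.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name webrtc-agents \
    --launch-template LaunchTemplateName=webrtc-agent-template \
    --min-size 1 --max-size 10 --desired-capacity 2 \
    --vpc-zone-identifier subnet-0123456789abcdef0
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name webrtc-agents \
    --policy-name cpu-target \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'
```

Scaling in is the harder part: an instance should be drained of active conferences before the group terminates it, e.g. via a lifecycle hook.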


Hello Samuel,

Good evening, and I hope you deployed your cluster and auto-scaling successfully. Could you please explain how you deployed the cluster? Which agents did you deploy on the various instances? It would be a really great help to us.

Thank you.

Best Regards,
Chandramouli.
