Hi,
I have been trying to set up a cluster, and that part seems to be working fine. It is based on Release v3.2, although the issue described below also happened on Release v3.1 and v3.1 Update 2.
But when I use multiple nodes and the audio-agent and video-agent end up on different back-end servers, I'm missing audio and/or the video shows up black, depending on the node that the client is using. I have done this test with the supplied Basic example.
It consists of 3 nodes, all running CentOS:
Node 1: Is running rabbit-mq and mongodb
Node 2: Is running: nuve cluster-manager portal session-agent webrtc-agent avstream-agent sip-agent recording-agent audio-agent video-agent sip-portal app
Node 3: Is running: portal session-agent webrtc-agent avstream-agent sip-agent recording-agent audio-agent video-agent sip-portal
So, as per the installation documentation, the "One or many" services are also started on Node 3.
I've seen 3 different scenarios happening, of which 2 are pretty useless:
1: When I connect with several clients, all on the latest Chrome version, the video streams are mixed together properly, and sometimes the audio as well.
Observed: on node-2 I see 3 processes active: audio-agent, video-agent and access agent. On node-3 I see no processes related to audio or video.
2: But it is also possible that when I connect again at a later time, I see the video but have no audio.
Observed: on node-2 I see 2 processes active: video-agent and access agent. On node-3 I see the audio-agent active.
3: And I've also experienced that all audio works fine, but the video ends up as a black image.
Observed: on node-2 I see 2 processes active: audio-agent and access agent. On node-3 I see the video-agent active.
I've not even tested recording yet, but my guess is that the result will be the same, or even nothing at all, if the recording-agent starts on a different node.
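(For reference, I checked which agents were active on each node roughly like this; the process names are just whatever my install happens to use, so the pattern may need adjusting:)
pgrep -af node    # list all Node.js processes with their full command lines, then look for the audio/video/webrtc agent entries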
I hope you can help me solve this issue.
Kind regards,
Mark
No updates on this?
I think first we need to know why only some of the components on node 2 and node 3 start up correctly. Do all configurations work well? Maybe you can attach the zipped log folder for a check.
By the way, only one sip-portal is accepted.
I can start all the components correctly on all the nodes, so that is not really the problem.
The problem is that a room can have its audio running on node 1 and its video running on node 2. In that situation, either there is no video or there is no audio.
It seems there is no way to ensure that the audio and video of one room will end up on the same node.
I am not using sip-portal, so that is out of scope for me; I'm only having these issues with the MCU.
Let's simplify the setup first to address your concern about the audio and video agents being separated onto different machines. This can help narrow down the problem. If this simple configuration still does not work, share the zipped log folder from each server node with us.
Node 1: Is running: rabbit-mq mongodb nuve cluster-manager portal session-agent webrtc-agent avstream-agent sip-agent recording-agent sip-portal app
(Comment out the audio-agent and video-agent start scripts in start-all.sh)
Node 2: Is running: audio-agent
(Comment out all other components' start scripts except the audio-agent in start-all.sh)
Node 3: Is running: video-agent
(Comment out all other components' start scripts except the video-agent in start-all.sh)
Make sure you edit the video-agent and audio-agent toml files with the correct rabbit-mq server and cluster ip/ethernet configuration.
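For example, on node 2 it would look roughly like this (the paths and file names below are from my own install and may differ slightly in yours, so treat them as illustrative):
cd Release-v3.2                                   # the extracted release directory
vi bin/start-all.sh                               # comment out every start line except the audio-agent one
grep -A 3 '\[rabbit\]' audio_agent/agent.toml     # should point at node 1's rabbit-mq host and port
grep -A 3 '\[cluster\]' audio_agent/agent.toml    # check the ip/network interface settings here
Do the equivalent on node 3 for the video-agent.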
My text disappeared in the previous message; the log files are attached there.
From the logs, we didn't find anything abnormal in the control logic. We suspect there may be a firewall issue preventing streams from flowing between the webrtc-agent and the video-agent/audio-agent. Please check the ip/ethernet configuration in the [cluster] section of the webrtc-agent, video-agent and audio-agent, and make sure the TCP connections among them are open.
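One simple way to verify that arbitrary TCP connections between the agent nodes are not blocked (the stream ports are allocated dynamically, so testing one fixed high port is only a rough check; the port number 12345 below is arbitrary):
nc -l 12345                 # on node 3: listen on an arbitrary high port (some netcat versions need: nc -l -p 12345)
nc -zv <node-3-ip> 12345    # on node 2: try to connect to it; "succeeded" means the path is open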
A firewall could indeed be the issue, I think. I'll check whether there is a firewall and then test with all ports open between the servers in the cluster.
Is there any documentation on which ports clustered servers use for communication? I can't find this in the current documentation.
It also seems not to be configurable; if that becomes possible in a future update, it would be very welcome.
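For the test I'm planning to do something like the following on each node (this assumes CentOS 7 with firewalld; on older CentOS with iptables the commands differ):
sudo firewall-cmd --permanent --zone=trusted --add-source=<other-node-ip>/32   # trust all traffic from the other cluster nodes
sudo firewall-cmd --reload
sudo systemctl stop firewalld    # or, purely for the test, stop the firewall entirely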
Yes, thanks for your suggestion. Currently we use system-allocated TCP ports (these will be high, ephemeral ports) to transmit streams across nodes internally. We will try to make this configurable and document it properly in the next major release.
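If you want to see which ports are actually in use at runtime, something like this on each node should show them (assuming the agents appear as Node.js processes in your environment):
sudo ss -ltnp | grep -i node    # TCP ports the agent processes are listening on
sudo ss -tnp | grep -i node     # established TCP connections between the agents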
If I open all the network ports between the nodes, it seems to be working. So that was the issue.
