We have configured MCU 4.2.1 on CentOS Linux release 7.6.1810 (Core) and are using the default cluster management strategies.
Today we got subscribe errors in all MCU rooms. We found the following server logs from that time.
Please help us clarify this issue.
Logs:
Conference agent
cat conference-7bc642d23b45acf406b1@78.46.185.95_13.log
2019-12-20 09:00:58.670 - DEBUG: WorkingNode - No native logger for reconfiguration
2019-12-20 09:00:58.742 - INFO: WorkingNode - pid: 7323
2019-12-20 09:00:58.743 - INFO: WorkingNode - Connecting to rabbitMQ server...
2019-12-20 09:00:58.762 - INFO: AmqpClient - Connecting to rabbitMQ server OK, options: { host: 'localhost', port: 5672 }
2019-12-20 09:00:59.074 - INFO: WorkingNode - conference-7bc642d23b45acf406b1@78.46.185.95_13 as rpc server ready
2019-12-20 09:00:59.078 - INFO: WorkingNode - conference-7bc642d23b45acf406b1@78.46.185.95_13 as monitor ready
2019-12-20 09:22:39.667 - INFO: AccessController - Fault detected on node: { agent: 'webrtc-617e25583c2c397da4a7@78.47.225.244',
node: 'webrtc-617e25583c2c397da4a7@78.47.225.244_64' }
2019-12-20 09:22:39.671 - INFO: AccessController - Fault detected on node: { agent: 'webrtc-617e25583c2c397da4a7@78.47.225.244',
node: 'webrtc-617e25583c2c397da4a7@78.47.225.244_64' }
2019-12-20 09:22:43.898 - WARN: AmqpClient - Late rpc reply: { data: 'ok', corrID: 566, type: 'callback' }
2019-12-20 09:22:43.899 - WARN: AmqpClient - Late rpc reply: { data: 'ok', corrID: 567, type: 'callback' }
2019-12-20 09:32:24.423 - INFO: AccessController - onFailed, sessionId: 104757539801329180 reason: Ice procedure failed.
2019-12-20 09:32:24.483 - INFO: AccessController - onFailed, sessionId: 229937816174673470 reason: Ice procedure failed.
2019-12-20 09:32:24.503 - INFO: AccessController - onFailed, sessionId: 191819364333277730 reason: Ice procedure failed.
2019-12-20 09:32:33.423 - INFO: AccessController - onFailed, sessionId: 405710256987966000 reason: Ice procedure failed.
2019-12-20 09:42:00.169 - ERROR: RoomController - Rebuid video mixer failed, reason: Failed in scheduling video worker, reason: No worker available, all in full load.
2019-12-20 09:42:00.169 - ERROR: RoomController - Rebuid video transcoder failed, reason: Failed in scheduling video worker, reason: No worker available, all in full load.
/root/Release-v4.2/conference_agent/roomController.js:1
(function (exports, require, module, __filename, __dirname) { "use strict";var _typeof="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(e){return typeof e}:function(e){return e&&"function"==typeof Symbol&&e.constructor===Symbol&&e!==Symbol.prototype?"symbol":typeof e},assert=require("assert"),logger=require("./logger").logger,makeRPC=require("./makeRPC").makeRPC,log=logger.getLogger("RoomController");function isResolutionEqual(e,i){return e.width&&i.width&&e.height&&i.height&&e.width===i.width&&e.height===i.height}module.exports.create=function(e,i,o){var r,t={},u=e.cluster,d=e.rpcReq,k=e.rpcClient,s=e.config,P=e.room,g=e.selfRpcId,R=s.transcoding&&!!s.transcoding.audio,_=s.transcoding&&!!s.transcoding.video,p=s.internalConnProtocol,C={},I={},S={},c=((r={}).video={encode:s.mediaOut.video.format.map(H),decode:s.mediaIn.video.map(H)},s.mediaOut.video.format,s.views.forEach(function(e){e.video.format&&r.video.encode.push(H(e.video.form
Error: Rebuild video mixer failed.
at Timeout._onTimeout (/root/Release-v4.2/conference_agent/roomController.js:1:24912)
at ontimeout (timers.js:498:11)
at tryOnTimeout (timers.js:323:5)
at Timer.listOnTimeout (timers.js:290:5)
Video/Audio agent
2019-12-20 09:42:00.159 - WARN: ClusterManager - schedule failed, purpose: video task: 59967585275940571421 reason: No worker available, all in full load.
2019-12-20 09:42:00.168 - WARN: ClusterManager - schedule failed, purpose: audio task: 56302511699781820909 reason: No worker available, all in full load.
2019-12-20 09:56:14.465 - INFO: ClusterManager - Worker video-57c07d0cca4fedf04daa@78.47.226.36 is not alive any longer, Deleting it.
2019-12-20 09:56:28.472 - INFO: ClusterManager - Worker audio-e73ae887bcf527f764a1@78.47.226.36 is not alive any longer, Deleting it.
Tags: HTML5, JavaScript*
Hi, according to the logs ("No worker available, all in full load"), please check the CPU load on your server.
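In case it is useful, here is a minimal sketch of such a check, assuming Node.js is available on the MCU host (the agents require it anyway). It compares the 1/5/15-minute load averages against the number of CPU cores; the 0.85 threshold is only an illustrative assumption, not the MCU's internal scheduling limit.

// quick-load-check.js - compare load averages against the core count on the MCU host
const os = require('os');

const cores = os.cpus().length;
const [load1, load5, load15] = os.loadavg();

console.log('cores:', cores);
console.log('load averages: 1m=' + load1.toFixed(2) +
            ' 5m=' + load5.toFixed(2) +
            ' 15m=' + load15.toFixed(2));

// 0.85 is an assumed rule of thumb, not a value read from the MCU configuration.
if (load5 / cores > 0.85) {
  console.log('Load is high relative to the core count;',
              'video/audio workers may report "full load".');
}

Run it with "node quick-load-check.js" on each agent machine (for example 78.47.226.36 from the logs) while a conference is active. If the load stays near or above the core count, the "No worker available, all in full load" and "Worker ... is not alive any longer" messages are consistent with an overloaded host.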
