Exception in defer callback: RangeError: Maximum call stack size exceeded

We are running the “snap” version of Rocket.Chat 0.66.2, database migration 129. The update to 0.66.1, I believe, got stuck while upgrading due to a database migration issue. As best we can tell, that has been resolved at this point.
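For anyone wanting to check the same thing, the migration state can be confirmed from the mongo shell with something like the line below (a sketch; it assumes Rocket.Chat’s migration framework keeps its state in a “migrations” collection with a “control” document, which may differ between versions):

// Check which database migration the instance believes it is at.
db.migrations.findOne({ _id: 'control' });
// In our case this reports something like { version: 129, locked: false }.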

We are running it in a KVM VM on Debian Stable 9.4, Linux kernel 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1, with 4 GB of RAM.

About two weeks ago we started seeing the error below in our logs. At this point, after Rocket.Chat has been running for about an hour, it starts to bog down and eventually becomes unresponsive. The only fix we’ve found so far is to restart Rocket.Chat every time this happens. We have about 76 concurrent active users at any given time.

This is the error we are seeing in our logs about once per minute, sometimes more frequently. Thanks in advance for any help or suggestions you can offer. At this point our assumption is that our problems are related to this error.

Exception in defer callback: RangeError: Maximum call stack size exceeded
    at appendToResult (/snap/rocketchat-server/1307/programs/server/packages/minimongo.js:1390:14)
    at /snap/rocketchat-server/1307/programs/server/packages/minimongo.js:1396:5
    at /snap/rocketchat-server/1307/programs/server/packages/minimongo.js:1396:20
    at doc (/snap/rocketchat-server/1307/programs/server/packages/minimongo.js:987:32)
    at match.result.subMatchers.every.fn (/snap/rocketchat-server/1307/programs/server/packages/minimongo.js:902:25)
    at Array.every (:null:null)
    at Matcher.docOrBranches [as _docMatcher] (/snap/rocketchat-server/1307/programs/server/packages/minimongo.js:901:32)
    at Matcher.documentMatches (/snap/rocketchat-server/1307/programs/server/packages/minimongo.js:4262:17)
    at /snap/rocketchat-server/1307/programs/server/packages/mongo.js:2635:48
    at Object.Meteor._noYieldsAllowed (packages/meteor.js:730:12)
    at OplogObserveDriver._handleDoc (/snap/rocketchat-server/1307/programs/server/packages/mongo.js:2634:12)
    at /snap/rocketchat-server/1307/programs/server/packages/mongo.js:3104:14
    at _IdMap.forEach (/snap/rocketchat-server/1307/programs/server/packages/id-map.js:82:35)
    at /snap/rocketchat-server/1307/programs/server/packages/mongo.js:3103:18
    at Object.Meteor._noYieldsAllowed (packages/meteor.js:730:12)
    at OplogObserveDriver._publishNewResults (/snap/rocketchat-server/1307/programs/server/packages/mongo.js:3081:12)
    at OplogObserveDriver._runQuery (/snap/rocketchat-server/1307/programs/server/packages/mongo.js:2995:10)
    at OplogObserveDriver._runInitialQuery (/snap/rocketchat-server/1307/programs/server/packages/mongo.js:2892:10)
    at /snap/rocketchat-server/1307/programs/server/packages/mongo.js:2442:10
    at /snap/rocketchat-server/1307/programs/server/packages/mongo.js:2286:9
    at Meteor.EnvironmentVariable.EVp.withValue (packages/meteor.js:1186:12)
    at packages/meteor.js:502:25
    at runWithEnvironment (packages/meteor.js:1238:24)

We are also using hubot-rocketchat through a Docker image for some software integration, in case that may also be related.

Did you resolve it the way described in the “Snap migration failed for 0.66.1” thread?

What type of things are you doing with hubot? Are you inserting messages? Other than hubot, do you have any inbound or outbound integrations?

When the fix for the 0.66.1 migration failure was first applied late last week, only the first part was done. We just did the second part; here is a copy/paste to be clear.

db.users.update({
  'settings.preferences.groupByType': { $exists: true }
}, {
  $rename: {
    'settings.preferences.groupByType': 'settings.preferences.sidebarGroupByType'
  }
}, {
  multi: true
});
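To double-check that this second part actually took effect, a quick sanity check along these lines (same field names as above) should return 0 for the old key and a non-zero count for the renamed one:

// Old field should be gone after the $rename above.
db.users.count({ 'settings.preferences.groupByType': { $exists: true } });
// Renamed field should now exist for the users that were migrated.
db.users.count({ 'settings.preferences.sidebarGroupByType': { $exists: true } });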

Regarding hubot, we are both inserting and reading messages.
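For context, the hubot usage is along the lines of the sketch below (room names and patterns are placeholders, not our actual script):

// Minimal hubot script sketch: "reading" by matching messages, "inserting" by posting replies/notifications.
module.exports = (robot) => {
  robot.hear(/server status/i, (res) => {
    // Reply in the channel where the message was seen...
    res.send('All services are up.');
    // ...and also drop a note into a separate room through the adapter.
    robot.messageRoom('ops-log', 'Status was requested and reported as healthy.');
  });
};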

We are using API calls to log in to Rocket.Chat automatically when users log in to a separate internal system.

We don’t have any other inbound or outbound integrations.
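The automatic login mentioned above is just the standard REST login call; roughly like this sketch (URL and credentials are placeholders, and it assumes a runtime with a global fetch, such as Node 18+):

// POST /api/v1/login returns data.authToken and data.userId, which later calls send
// as X-Auth-Token / X-User-Id headers.
async function loginToRocketChat(user, password) {
  const res = await fetch('https://chat.example.com/api/v1/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ user, password }),
  });
  const { data } = await res.json();
  // Each successful login creates a new login token for the user,
  // which stays around unless the session is later closed via /api/v1/logout.
  return data;
}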

Alright, since no one else has replied, I’ll answer with what we found. We spent a lot of time trying to track this down. Our users log in via an API integration, and we didn’t realize the login tokens were stacking up in the Mongo database. Furthermore, we have a bot that also logs in via the API and had an enormous number of tokens piled up. We wiped out all the tokens for all users, and so far things seem much smoother and more stable.
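For anyone in the same boat, the cleanup we did amounts to something like this from the mongo shell (a sketch; it clears every stored login token, so all users and bots are forced to log in again):

// Meteor keeps resume/login tokens in services.resume.loginTokens on each user document.
// Emptying that array for every user wipes the accumulated tokens (including the bot's).
db.users.update(
  {},
  { $set: { 'services.resume.loginTokens': [] } },
  { multi: true }
);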

One link that helped us unravel this is as follows. Hopefully this information helps someone else.