Description
Four of the users in my workspace are always shown as online (i.e. with the green light), even when they are not and all their devices are turned off. This also means they never get notified on mobile, as the server thinks they are at their device when they are not.
Server Setup Information
- Version of Rocket.Chat Server: 4.5.2
- Operating System: Alpine Linux
- Deployment Method: Docker
- Number of Running Instances: 1
- DB Replicaset Oplog: ? (not sure how to check this; see the note below the list)
- NodeJS Version: v14.18.3
- MongoDB Version: 4.4.12 / wiredTiger
- Proxy: Nginx
- Firewalls involved:
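I marked the oplog entry with a “?” because I don’t know how to verify it. From what I gather from the MongoDB docs, something like this in the mongo shell inside the database container should show it (I haven’t run these yet, so treat them as a sketch):

```js
// Open a shell in the MongoDB container first, e.g.:
//   docker exec -it <mongo-container-name> mongo
rs.status().ok            // 1 means a replica set is configured and healthy
rs.printReplicationInfo() // prints the oplog size and time range when an oplog exists
```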
Any additional Information
There are no errors in the logs, and I have no idea how to solve it. It started a few days ago, after the 4.5.2 update. Thanks.
Hi, I’d like to add that I believe the status system in my Rocket.Chat instance is completely broken.
I created a new user to test this issue, and as soon as I logged in, its status was set to “offline”. I then set it to “online”, and now when I log out, the test user still shows as “online”.
I am very confused by this sudden and unexpected issue. Could anyone lend their knowledge? It is causing great disorganization in my team.
Hi!
Can you check whether auto away is enabled? That is one setting that may interfere with this behavior, apart from other underlying problems.
You should be able to open two different browsers and, when you change the presence of one of those users in one browser, see it reflected in the user list of a channel they share in the other, for example.
Each user has their own configuration (Avatar > My Account > Preferences > User Presence), with the default for new users defined in Admin > Account > Default User Preferences.
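If you want to see what a given user actually has stored, you can also look at the user document directly in MongoDB. A quick sketch (field names from memory of the schema, so double-check them before relying on this):

```js
// mongo shell, against the Rocket.Chat database (often named "rocketchat")
// "some.user" is a placeholder username
db.users.findOne(
  { username: "some.user" },
  { "settings.preferences.enableAutoAway": 1, "settings.preferences.idleTimeLimit": 1 }
)
```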
Hi, thanks for your response. I checked auto away, and it’s still set to the default of 300 seconds and switched on.
Opening two browsers and changing the status does make the status change in the other browser, but when a user goes offline, the status sticks as it was. If a user had changed to “do not disturb”, they stay like that even when they are offline. It seems that, no matter what, no one can go offline.
So when a user logs off, their status does change to offline for other users, right?
The difficult part of this would be consistently replicating the issue.
Can you replicate this on a clean, fresh new install in docker?
When someone logs out or shuts down their device, their account keeps the last status they set, i.e. if they had manually set themselves to idle, they stay idle when they should show as offline.
As for recreating the issue, I was unable to get a fresh install to act in the same way. The issue just seemed to appear with no warning. The logs show no symptoms of an issue.
Do you have any recommendation which could solve this? Thank you.
Hi, I set the logging to Level 2 and got this:
"level":35,"time":"2022-03-23T20:10:14.606Z","pid":9,"hostname":"[/]","name":"Meteor","method":"UserPresence:away","userId":"[/]","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.83 Safari/537.36","remoteIP":"[/]","instanceId":"[/]"}
I removed potentially private information with “[/]”.
I believe this is a user’s connected app triggering the auto-away feature, which kicks in after a period of inactivity, but the status does not change on the server; it stays “online”. So I think the apps are working fine, but the server isn’t acting on the instruction for some reason.
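To test that theory, I’m thinking of watching the user document in MongoDB while a client goes into auto-away, to see whether the server ever records the change. The field names here are my guess at the schema from what I’ve read, so they may be off:

```js
// mongo shell, Rocket.Chat database (often named "rocketchat")
// "testuser" stands for the test account I created earlier
db.users.findOne(
  { username: "testuser" },
  { status: 1, statusConnection: 1, statusDefault: 1 }
)
```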
One thing that may interfere here is your firewall.
There was a case here on the forums where a firewall was blocking some of the requests (WAF rules) and causing unexpected behavior.
I checked my firewall, but I don’t see anything from Rocket.Chat that is being blocked.
Going by the log entry, I think the server receives the “away” statuses fine but just doesn’t apply the status it was told to set.
Is there any way to “refresh” my Rocket.Chat instance so as to keep the data and messages but wipe out the app and start fresh? Or could I manually change the status entries in the database somehow?
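For example, would something like the following be a safe manual fix? This is only a sketch based on my (possibly wrong) understanding that the status lives on the user document:

```js
// mongo shell, Rocket.Chat database (often named "rocketchat")
// "stuck.user" is a placeholder for one of the affected accounts
db.users.updateOne(
  { username: "stuck.user" },
  { $set: { status: "offline", statusConnection: "offline" } }
)
```

Thank you.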
Today I found that the database container is repeatedly crashing, every minute or so. This must be the cause of the issue, but I do not know how to troubleshoot it. Each time it happens, the Rocket.Chat container logs this error:
2022-03-29T08:22:00.904466083Z {"level":30,"time":"2022-03-29T08:22:00.904Z","pid":11,"hostname":"9d7375a5e9e2","name":"SyncedCron","msg":"Exception running scheduled job MongoNetworkError: connection 236 to 172.28.0.3:27017 closed\n at Connection.handleIssue (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/cmap/connection.js:129:15)\n at Socket.<anonymous> (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/cmap/connection.js:62:35)\n at Socket.emit (events.js:400:28)\n at Socket.emit (domain.js:475:12)\n at TCP.<anonymous> (net.js:686:12)\n at TCP.callbackTrampoline (internal/async_hooks.js:130:17)"}
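In case it’s useful, I’m going to run a quick health check from inside the database container and attach the output. These are standard MongoDB shell commands as far as I know, but corrections are welcome:

```js
// docker exec -it <mongo-container-name> mongo
rs.status()                           // replica set member states and heartbeats
db.serverStatus().connections         // current vs. available connections
db.adminCommand({ getLog: "global" }) // recent mongod log lines
```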