Moved to v6 and now facing lag

Description

Prior to moving to v6, everything ran flawlessly. This isn't a constant issue, but it happens a lot: when sending a message (private or to a channel), it takes a second or two for the message to appear, and uploading images now takes a long time.

I do not know how to check the logs inside Docker to see if there are any issues popping up.

Server Setup Information

  • Version of Rocket.Chat Server: 6.0.0
  • Operating System: Ubuntu Latest
  • Deployment Method: docker
  • Number of Running Instances: 1
  • DB Replicaset Oplog: Enabled
  • NodeJS Version: v14.21.2
  • MongoDB Version: MongoDB 5.0.15 / wiredTiger (oplog Enabled)
  • Proxy: nginx
  • Firewalls involved: disabled

Any additional Information

Hi! Welcome back :slight_smile:

You can check the logs from WORKSPACE > LOGS inside Rocket.Chat,

or go to the directory where your Rocket.Chat docker-compose.yml is and issue:

docker compose logs -f --tail 10
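If the combined output is too noisy, you can also tail a single service. For example (assuming your services are named rocketchat and mongodb in your compose file):

docker compose logs -f --tail 100 rocketchat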

You can also check the resource consumption by issuing:

docker stats

this can give you a clue about the resource usage for each container.
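If you just want a one-off snapshot instead of the live view, add --no-stream:

docker stats --no-stream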

I used the log command docker compose logs -f --tail 10 and it seems like the bot I had installed (Remind Me) was causing some issues. I removed it, but I did get this message in the logs:

rocketchat_1  | This instance could not remove the Add Reminder app package. If you are running Rocket.Chat in a cluster with multiple instances, possibly other instance removed the package. If this is not the case, it is possible that the file in the database got renamed or removed manually.

How can I make sure that this was completely removed?
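In the meantime I was going to double-check the installed/private apps list in the admin area, and, as a rough sketch, query the database directly from the mongodb container. I am assuming here that the Apps-Engine keeps its records in a collection named rocketchat_apps with an info.name field (I have not confirmed the exact names), and that mongosh is available in the image:

docker compose exec mongodb mongosh parties --eval 'db.rocketchat_apps.find({}, { "info.name": 1 }).toArray()'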

I am also getting these two messages:

mongodb_1 | {"t":{"$date":"2023-03-15T20:20:55.933+00:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn36","msg":"Slow query","attr":{"type":"command","ns":"parties.rocketchat_uploads.chunks","command":{"find":"rocketchat_uploads.chunks","filter":{"files_id":"qvNKCMSTC6vbjTbfC"},"sort":{"n":1},"limit":0,"lsid":{"id":{"$uuid":"0991f0d0-08a0-43e9-b366-bd4430e8a746"}},"$clusterTime":{"clusterTime":{"$timestamp":{"t":1678911648,"i":1}},"signature":{"hash":{"$binary":{"base64":"AAAAAAAAAAAAAAAAAAAAAAAAAAA=","subType":"0"}},"keyId":0}},"$db":"parties"},"planSummary":"COLLSCAN","keysExamined":0,"docsExamined":12514,"hasSortStage":true,"cursorExhausted":true,"numYields":30,"nreturned":11,"queryHash":"9FE7AF15","planCacheKey":"7D04618B","reslen":2765910,"locks":{"FeatureCompatibilityVersion":{"acquireCount":{"r":31}},"Global":{"acquireCount":{"r":31}},"Mutex":{"acquireCount":{"r":1}}},"storage":{"data":{"bytesRead":235442021,"timeReadingMicros":115614}},"remote":"192.168.0.8:48654","protocol":"op_msg","durationMillis":423}}

mongodb_1 | {"t":{"$date":"2023-03-15T20:21:20.419+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1678911680:419460][1:0x7f4f41c9f700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 131168948, snapshot max: 131168948 snapshot count: 0, oldest timestamp: (1678911380, 2) , meta checkpoint timestamp: (1678911680, 2) base write gen: 2254800"}}

Looks like MongoDB is using a ton of RAM:

NAME                        CPU %     MEM USAGE / LIMIT       MEM %
rocketdocker_rocketchat_1   26.83%    677.1MiB / 31.13GiB     2.12%
rocketdocker_mongodb_1      4.58%     16.74GiB / 31.13GiB     53.79%

I checked the logs and found that the Remind Me bot I had installed was causing some issues. I removed it and will see if that helps with the lag.

Are the two log messages above issues that I need to look into as well?


That slow query message looks like MongoDB is having a hard time finding the upload chunks for files_id qvNKCMSTC6vbjTbfC: the planSummary shows COLLSCAN, meaning it scanned the whole rocketchat_uploads.chunks collection instead of using an index.
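If you want to dig into it, here is a rough sketch (the database name parties comes from your log line; swap mongosh for the legacy mongo shell if your image does not ship it): check whether the chunks collection has the usual GridFS index on { files_id: 1, n: 1 }, and create it if it is missing:

docker compose exec mongodb mongosh parties --eval 'db.rocketchat_uploads.chunks.getIndexes()'

docker compose exec mongodb mongosh parties --eval 'db.rocketchat_uploads.chunks.createIndex({ files_id: 1, n: 1 }, { unique: true })'

Longer term, switching the File Upload storage type away from GridFS (to FileSystem or an S3-compatible store) is generally recommended so these collections stop weighing the database down.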

The second log message doesn't look like a problem; it is just WiredTiger reporting checkpoint progress.
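About the memory numbers from docker stats: that is mostly WiredTiger's cache, which by default is sized at roughly 50% of RAM minus 1 GB, so ~16 GiB on a 31 GiB host is expected rather than a leak. If you want to cap it, one option (just a sketch; keep whatever flags your compose file already passes, such as --replSet and --oplogSize) is to add --wiredTigerCacheSizeGB to the mongod command, for example:

mongod --oplogSize 128 --replSet rs0 --wiredTigerCacheSizeGB 4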

Let me know if this helps!

Thanks @dudanogueira, I have everything resolved. I used this thread to get the database file issues resolved.
