OpenID/Keycloak authentication failing because Mongo function "remove" is not available

Description

My issue consists of the following error message inside the rocket.chat pod logs:

{"level":50,"time":"2025-01-28T08:35:54.637Z","pid":1,"hostname":"rocketchat-server-0","name":"System","msg":"Exception while invoking method login","err":{"type":"Error","message":"remove + is not available on the server. Please use removeAsync() instead.","stack":"Error: remove + is not available on the server. Please use removeAsync() instead.\n at Object.ret.<computed> [as remove] (packages/mongo/remote_collection_driver.js:53:15)\n at Collection.remove (packages/mongo/collection.js:1016:29)\n at Collection.Mongo.Collection.<computed> [as remove] (packages/dispatch_run-as-user.js:346:17)\n at Object.OAuth._retrievePendingCredential (app/2fa/server/loginHandler.ts:88:29)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at MethodInvocation.<anonymous> (packages/accounts-oauth/oauth_server.js:18:18)\n at packages/accounts-base/accounts_server.js:593:9\n at tryLoginMethod (packages/accounts-base/accounts_server.js:1560:14)\n at AccountsServer._runLoginHandlers (packages/accounts-base/accounts_server.js:592:22)\n at AccountsServer.Accounts._runLoginHandlers (app/lib/server/lib/loginErrorMessageOverride.ts:9:17)\n at MethodInvocation.methods.login (packages/accounts-base/accounts_server.js:654:22)"}}

The error is thrown as soon as I try to log in via the Keycloak login interface on my Rocket.Chat instance. When I check the session in the Keycloak admin interface, I can in fact see that the session exists and is active, so the login technically works.

Nevertheless, I get redirected back to https://rocketchat.local/home, where I see the login screen, and the logs show the error above.

What I have tried / found out

I have researched this problem for the last day and found this GitHub issue for Rocket.Chat: https://github.com/RocketChat/Rocket.Chat/issues/34184.

For the reporter in that issue thread, the problem arose from a typo in his userinfo endpoint configuration. I have already checked those settings multiple times and none of them are misconfigured. Proof of that is that the Keycloak login session is active, as described above.
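For reference, this is roughly what I configured in the Rocket.Chat OAuth admin settings for Keycloak (hostname, realm, and credentials are placeholders from my setup, not official values; older Keycloak versions prefix the realm path with /auth):

```yaml
# Custom OAuth settings in Rocket.Chat (placeholder host/realm/credentials)
URL: https://keycloak.example.com/realms/myrealm
Token Path: /protocol/openid-connect/token
Identity Path: /protocol/openid-connect/userinfo
Authorize Path: /protocol/openid-connect/auth
Id: rocketchat            # client ID registered in Keycloak
Secret: <client-secret>   # client secret from Keycloak
```

A typo in any of these paths would produce exactly the kind of failure described in the linked issue, which is why I checked them repeatedly.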

I read the official documentation carefully and followed it during the setup process: https://docs.rocket.chat/docs/openid-connect-keycloak

I also found and used another guide (I do not use Gazelle, but I used it as a general orientation for the Keycloak configuration inside Rocket.Chat): https://validation.sequoiaproject.org/gazelle-documentation/Gazelle-Keycloak/rocketchat.html#top

I then read up on the remove() and removeAsync() functions and found that they are deprecated; one should use deleteOne() and deleteMany() instead.

As a next step I tried to downgrade MongoDB so that those old functions could be used again, since my Rocket.Chat instance is also compatible with MongoDB 5.x. I pulled such an image and found out that remove() and removeAsync() are already deprecated in that version too. Strangely, the functions do exist, so they can be used, and MongoDB only warns that they are deprecated; yet for whatever reason the error tells me that they do not exist at all …

Regarding compatibility, the release notes of my Rocket.Chat version state that it is compatible with the MongoDB and Node versions I am using: https://github.com/RocketChat/Rocket.Chat/releases/tag/7.2.1

Server Setup Information

  • Version of Rocket.Chat Server: 7.2.1
  • Operating System: Linux (Docker Image)
  • Apps Engine Version: 1.48.1
  • Deployment Method: Kubernetes / Docker
  • Number of Running Instances: 2
  • DB Replicaset Oplog: Yes (1 member inside the replication set)
  • NodeJS Version: v22.11.0
  • MongoDB Version: 6.0.13 / wiredTiger (oplog Enabled)
  • Proxy: nginx
  • Firewalls involved: no

Thanks for the well documented post.

A couple of things.

Don’t try downgrading DBs. It is a recipe for disaster. The only safe way to go back is to restore from a backup, with the right Rocket.Chat version matched to the DB. Rocket.Chat makes a lot of background DB changes across versions, and it is easy to get into a mess.

The error is likely a red herring in some respects, as you seem to have discovered by changing DB versions.

Next: are you just trying to set this up for the first time, or can you give some history, please?

If this is a new install, I would suggest you start with MongoDB 7 - it is the latest supported version. Set it in your .env file.

Either way, don’t mess about with your DB - it should ‘just work’ if the settings are correct, and per that bug the most common cause is an incorrect setting. You end up trying so many things that you go ‘blind’ to a tiny mistake!

Sometimes a clean start is better.

Number of Running Instances: 2

Why 2?

What licence type? Approximately how many users?

I strongly suspect this is a config issue somewhere; we just need to find out where.

Thanks for the fast reply.

Well, I thought I could use 2 instances because a decent number of users will be using this Rocket.Chat instance. But if that complicates the problem in your opinion, I can try with only one.

Licence type: Starter.
Estimated users: about 100.

To clarify: I am currently engineering a Rocket.Chat integration on Kubernetes. That means I have no production data to worry about - I can wipe the database and the application as many times as I want. In the end I want a normal instance of Rocket.Chat with a Keycloak login.

For that reason I followed your tip of not messing around with the database and deleted the whole thing. I pulled the following MongoDB image: bitnami/mongodb:7.0.15. The reason is that if I don’t use the Bitnami image, the replica set does not get initialised by only setting the MONGODB_REPLICA_SET_MODE and MONGODB_REPLICA_SET_NAME variables (I don’t know why, though).
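For completeness, here is a minimal sketch of the MongoDB container spec I ended up with (the container name, hostname, and replica set key are illustrative values from my own manifests, not from any official chart):

```yaml
# MongoDB container in my StatefulSet (illustrative values)
containers:
  - name: mongodb
    image: bitnami/mongodb:7.0.15
    env:
      - name: MONGODB_REPLICA_SET_MODE
        value: "primary"
      - name: MONGODB_REPLICA_SET_NAME
        value: "rs0"
      - name: MONGODB_ADVERTISED_HOSTNAME   # so replica set members are reachable by name
        value: "mongodb-0.mongodb-headless"
      - name: MONGODB_REPLICA_SET_KEY       # required when authentication is enabled
        value: "replicasetkey123"
```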

→ Anyway, the same error persists even after using the newest supported MongoDB.
Where do you think the error could be coming from? In my opinion it can only have something to do with the Keycloak OAuth configuration.

Which variable do I need to set for the MongoDB version? I cannot find it in the docs …

Edit: Additionally, knowing that it has to be a configuration error, I can also say that I do very little configuration. As I said, I wiped the whole thing and made a fresh “install” by rebuilding the pods. After that I log into Rocket.Chat, and the only thing I do is set up the Keycloak OAuth service. All other settings remain at their default values.

Edit 2: After some further investigation I saw that the error stack trace shows that, during the login attempt, the system tries to execute the method OAuth._retrievePendingCredential. This function seems to be part of the OAuth authentication flow, which likely works in conjunction with 2FA to validate the user’s credentials. The error occurs while this process is running, and it involves interacting with a MongoDB collection (which is where the issue with remove arises).

I remember that step 7 of the official documentation (https://docs.rocket.chat/docs/openid-connect-keycloak) says I should disable two-factor authentication, which I did, but with no effect: the error still persists.

Another thing I came across was inspecting the JWT token that gets exchanged during the authentication process. When inspecting the network traffic in the browser’s dev tools, I do not see any real exchange between the applications. I know this because I have other applications that use Keycloak authentication, and their network traffic always contains a request starting with authenticate?session_code=..., but in this case there is nothing. This might be the reason for the OAuth._retrievePendingCredential part of the error stack trace.

These might just be coincidences that lead me to believe the error is caused by the authentication failing. One way or the other, I have certainly made a misconfiguration somewhere, and I am trying to give you as much information as possible.

Starter is free for up to 50 users. More than that and you will have to speak to sales or use CE.

You do not need two instances for this. It adds complexity because you need an HA-type setup, and I doubt you’ll need that to start with.

A very rough guide here:

https://docs.rocket.chat/docs/system-requirements

I don’t understand. One moment you say you are using 7.x, and the next you are asking which var to use?

Have you actually read the yml and env files?

.env

#MONGODB_VERSION=
# See:- https://hub.docker.com/r/bitnami/mongodb

compose.yml

  mongodb:
    image: docker.io/bitnami/mongodb:${MONGODB_VERSION:-6.0}

Note for Kubernetes:

https://docs.rocket.chat/docs/deploy-with-kubernetes

Yup, as I said at the start, it is almost certainly configuration. Go back, start again, set it up correctly with 2FA disabled, etc., and try again.

I saw that, but it is not a problem. Thanks for the heads-up anyway.

I have now changed the replicas to 1 and use Type: Deployment instead of Type: ReplicaSet. While that makes sense, it did nothing about the original problem - but that was expected.

I am sorry for the confusion. I use YAML files, and of course I have adjusted the MongoDB version tag. I was just confused about the .env file you were referencing, because I have the environment variables set in a ConfigMap, not an .env file.
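In case it helps anyone else, the .env entry you quoted translates to something like this in my setup (the ConfigMap name is my own choice; the pod picks it up via envFrom):

```yaml
# ConfigMap equivalent of the MONGODB_VERSION entry in the .env file (illustrative)
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-env
data:
  MONGODB_VERSION: "7.0.15"
```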

I guess this is all you can do for now. I will post the solution here if I get to it.

Thanks for your help. Much appreciated.

NP - let us know how you get along.