Separate RC database for multiple tenants


I am trying out Rocket.Chat in a multi-tenant architecture. I ran into a problem where I needed to have all 60 Rocket.Chat collections inside each tenant's database. When the server starts, the Rocket.Chat database is configured from the env variable MONGO_URL; this way I ended up with the chats of multiple tenants in a single database. At the end of the day, a single shared chat database is not what I want. I am trying to isolate each tenant's chat from the others, i.e. either by having a separate chat DB for each tenant or by creating the Rocket.Chat collections inside each tenant's DB.
Is there a way to do either of those?

Server Setup Information

  • Version of Rocket.Chat Server: 3.1.0-develop
  • Operating System: Linux
  • Deployment Method: Kubernetes (Docker)
  • Number of Running Instances: 1
  • DB Replicaset Oplog: Enabled
  • NodeJS Version: 12.14.1 - x64
  • MongoDB Version: 4.2.5

Multi-tenant we’d do something like:
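For example, per-tenant environment configuration along these lines (a sketch; the hostnames, database names, and URLs below are illustrative assumptions, while MONGO_URL, MONGO_OPLOG_URL, and ROOT_URL are the standard Rocket.Chat variables):

```shell
# One Rocket.Chat instance per tenant, each pointed at its own database
# (hostnames and database names here are illustrative assumptions).
export MONGO_URL="mongodb://mongo:27017/rocketchat_tenant1"
export MONGO_OPLOG_URL="mongodb://mongo:27017/local"
export ROOT_URL="https://tenant1.chat.example.com"

# ...and for a second tenant, a second instance with e.g.:
#   MONGO_URL=mongodb://mongo:27017/rocketchat_tenant2
#   ROOT_URL=https://tenant2.chat.example.com
```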




Etc. Each tenant would of course get its own address and its own running instance.


But this has to be defined in the deployment file before the server is running, right?
In my case I won’t know the tenant_id until the tenant makes its first connection.
Is there a way for me to define it like this:
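Presumably something of this shape (a sketch; the host and naming scheme are assumptions, only the idea of templating on TENANT_ID comes from the question):

```shell
# Hypothetical shape: keep MONGO_URL as a template and render it only once
# the tenant id is known (host and naming scheme are assumptions).
MONGO_URL_TEMPLATE='mongodb://mongo:27017/rocketchat_${TENANT_ID}'

TENANT_ID="acme"                     # becomes known at first connection
MONGO_URL=$(eval echo "$MONGO_URL_TEMPLATE")
echo "$MONGO_URL"                    # mongodb://mongo:27017/rocketchat_acme
```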


where the value of TENANT_ID would be set after the tenant makes its connection.

By this, do you mean a separate instance for each and every tenant?
Am I right?

If it were me building something like this to support multiple customers, I would not even start the resources until you know the tenant id. Starting a new Rocket.Chat instance doesn’t really take that much time.

So I’d put something in your process that is hit first, lets you get the tenant_id, and then triggers resource creation. That way you never even start a Rocket.Chat instance for that tenant until you know who it is.

Whether this is some simple landing page that says “Preparing for first use”, or a registration process.

So do you mean to start the server once the tenant_id becomes available? Is that correct?

Also, in a Kubernetes context, is one tenant per pod good for scaling? Since the deployment config is common to all pods, there is no pod-specific config, so we can’t have a different MONGO_URL for each pod (tenant).

I’m not sure I understand why you want/need to share the deployment config and then change it per pod.

Speaking from experience, this is how I would do a multi-tenant setup on k8s:

  1. Some system to establish the tenant.
  2. Create a new deployment (in k8s terms) with its own MONGO_URL pointing to the tenant id’s database.
  3. Have your system check for it to come up.
  4. Take the user there. 🙂
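Steps 2 and 3 could be scripted roughly like this (a sketch, not a tested manifest; the tenant id, image tag, hostnames, and URLs are assumptions, and it needs a working kubectl context):

```shell
#!/bin/sh
# Sketch: create a per-tenant Rocket.Chat deployment once the tenant id
# is known (names, image tag, and URLs below are assumptions).
TENANT_ID="acme"

kubectl create deployment "rocketchat-${TENANT_ID}" \
  --image=rocketchat/rocket.chat:3.1.0

kubectl set env deployment/"rocketchat-${TENANT_ID}" \
  MONGO_URL="mongodb://mongo:27017/rocketchat_${TENANT_ID}" \
  MONGO_OPLOG_URL="mongodb://mongo:27017/local" \
  ROOT_URL="https://${TENANT_ID}.chat.example.com"

kubectl expose deployment "rocketchat-${TENANT_ID}" --port=3000

# Wait for it to come up before redirecting the user there.
kubectl rollout status deployment/"rocketchat-${TENANT_ID}" --timeout=300s
```

Whether this is triggered from a landing-page backend or a registration hook, the point is the same: nothing tenant-specific exists until the tenant id does.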

I think we only need to get settings based on tenant_id.
When the app starts, it would load the settings of all tenants; then, after a user logs in, we have a tenant_id that can be used to get that tenant’s settings from the DB/cache.
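A minimal sketch of that lookup, assuming a process-local cache in front of the DB (the function name, cache, and the simulated DB read are all hypothetical):

```shell
#!/usr/bin/env bash
# Hypothetical per-tenant settings lookup: try a local cache first and
# fall back to a (simulated) DB read, keyed by tenant_id.
declare -A SETTINGS_CACHE

get_tenant_settings() {
  local tenant_id="$1"
  if [ -z "${SETTINGS_CACHE[$tenant_id]:-}" ]; then
    # Placeholder for the real DB query for this tenant's settings.
    SETTINGS_CACHE[$tenant_id]="settings-for-${tenant_id}"
  fi
  printf '%s\n' "${SETTINGS_CACHE[$tenant_id]}"
}

get_tenant_settings acme    # first call: "DB" read, result cached
get_tenant_settings acme    # second call: served from the cache
```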