Run several Rocket.Chat servers on one K8s cluster

Description

Server Setup Information

  • Version of Rocket.Chat Server: 2.4.9
  • Operating System: Ubuntu 18
  • Deployment Method: Helm/Kubernetes
  • Number of Running Instances: 1
  • DB Replicaset Oplog:
  • NodeJS Version:
  • MongoDB Version:
  • Proxy:
  • Firewalls involved: n/a

Any additional Information

I’m currently trying to set up multiple Rocket.Chat deployments on one Kubernetes cluster. The idea I have is the following:

  • One Rocket.Chat deployment per customer per environment. So if I had two customers and three environments (dev, stage, and prod), I would have six deployments in total (see the sketch below).
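
A minimal sketch of what that could look like, assuming Helm 2 syntax (--name; under Helm 3 the release name becomes the first positional argument) and placeholder customer names:

# One named release per customer per environment: six releases in total.
# "acme" and "globex" are hypothetical customer names.
for env in dev stage prod; do
  for customer in acme globex; do
    helm install --name "rocketchat-${customer}-${env}" stable/rocketchat
  done
done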

I’m following the default setup to deploy Rocket.Chat via helm in a dev environment:

helm install stable/rocketchat \
  --set mongodb.mongodbPassword=$(echo -n $(openssl rand -base64 32)),mongodb.mongodbRootPassword=$(echo -n $(openssl rand -base64 32))

When I do this, I’m able to port-forward Rocket.Chat to my local host just fine. However, when I try to make another deployment, the Rocket.Chat server goes into a CrashLoopBackOff with the following errors (I have tried both different ports and the same ports):

/app/bundle/programs/server/node_modules/fibers/future.js:313
                                                throw(ex);
                                                ^

MongoNetworkError: failed to connect to server [fc-mongodb:27017] on first connect [MongoError: Authentication failed.]
    at Pool.<anonymous> (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/topologies/server.js:431:11)
    at emitOne (events.js:116:13)
    at Pool.emit (events.js:211:7)
    at connect (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/pool.js:557:14)
    at callback (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connect.js:109:5)
    at provider.auth.err (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connect.js:352:21)
    at _authenticateSingleConnection (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/auth/auth_provider.js:66:11)
    at sendAuthCommand (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/auth/scram.js:215:18)
    at Connection.messageHandler (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connect.js:334:5)
    at emitTwo (events.js:126:13)
    at Connection.emit (events.js:214:7)
    at processMessage (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connection.js:364:10)
    at Socket.<anonymous> (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connection.js:533:15)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at addChunk (_stream_readable.js:263:12)

and

/app/bundle/programs/server/node_modules/fibers/future.js:280
                                                throw(ex);
                                                ^

MongoParseError: Unescaped slash in userinfo section
    at parseConnectionString (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/uri_parser.js:538:21)
    at connect (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/operations/mongo_client_ops.js:195:3)
    at connectOp (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/operations/mongo_client_ops.js:284:3)
    at executeOperation (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/utils.js:416:24)
    at MongoClient.connect (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/mongo_client.js:175:10)
    at Function.MongoClient.connect (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/mongo_client.js:341:22)
    at new MongoConnection (packages/mongo/mongo_driver.js:177:11)
    at new MongoInternals.RemoteCollectionDriver (packages/mongo/remote_collection_driver.js:4:16)
    at Object.<anonymous> (packages/mongo/remote_collection_driver.js:38:10)
    at Object.defaultRemoteCollectionDriver (packages/underscore.js:784:19)
    at new Collection (packages/mongo/collection.js:97:40)
    at new AccountsCommon (packages/accounts-base/accounts_common.js:23:18)
    at new AccountsServer (packages/accounts-base/accounts_server.js:23:5)
    at packages/accounts-base/server_main.js:7:12
    at server_main.js (packages/accounts-base/server_main.js:19:1)
    at fileEvaluate (packages/modules-runtime.js:336:7)

I’m guessing this has something to do with the persistent volumes, because those stick around even when I run helm delete. Would this work better if I created an external MongoDB instance per environment with several databases (one DB per customer), connected to that instance, and disabled the MongoDB helm chart?
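
If I do go the external-MongoDB route, the chart can be pointed at an existing instance. A rough sketch, assuming the stable/rocketchat chart exposes mongodb.enabled plus external URL values (the value names here are assumptions; check the chart’s values.yaml):

# mongo-dev.example.com stands in for the per-environment instance,
# customer1 for the per-customer database; both are placeholders.
helm install stable/rocketchat \
  --set mongodb.enabled=false \
  --set externalMongodbUrl="mongodb://rocketchat:<password>@mongo-dev.example.com:27017/customer1" \
  --set externalMongodbOplogUrl="mongodb://oplog-user:<password>@mongo-dev.example.com:27017/local?authSource=admin"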

UPDATE: I just discovered what is happening when I make more than one Rocket.Chat deployment: there is only one persistent volume, and each new deployment creates another persistent volume claim that competes with every other claim for it. What I don’t understand is why the helm chart only makes one persistent volume…
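
One way to see the contention, assuming kubectl access to the namespace:

# With a single PV, every claim after the first sits in Pending.
kubectl get pv,pvc
# The Events section at the bottom of the describe output shows why
# the stuck claim cannot bind.
kubectl describe pvc <pending-claim-name>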

Hi @zdelfw Zach:

  • For any application in the world, Rocket.Chat or not, the helm delete command will never delete a persistent volume or its PVCs.
  • Indeed, Helm is designed (among other things) to run against production environments.
  • And I guarantee your boss would fire you if you ever redeployed your Rocket.Chat and then had to tell him that the last two years of the DevOps team’s data had disappeared.
  • For this reason, and because the Helm maintainers do not want you to lose your job, they designed Helm so that it never, ever deletes any data, whatever command you might run, and especially when you helm delete.

And this is why your issue has nothing to do with the helm delete command not deleting persistent volumes.

Well, it actually is tied directly into this error. There are also issues of old service accounts lying around. But I don’t expect Helm to delete either of those, and I’ve decided that its current functionality is for the best. :grin:
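
If someone really does want a clean slate, the claims have to be removed explicitly after deleting the release. A sketch, assuming Helm 2 (--purge) and that the chart labels its PVCs with the release name (verify with kubectl get pvc --show-labels):

# Helm leaves the data behind by design; remove the claims yourself.
helm delete my-release --purge
# Deleting a PVC releases its PV (and removes it under a Delete reclaim policy).
kubectl delete pvc -l release=my-release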

Lol, and I will be very glad to learn from you. I’m here because of a Rocket.Chat issue myself: I’m trying to deploy using its Helm chart, and… well, you know about it. Ping me anytime, for instance on GitHub (Helm Kubernetes Deploy RocketChat Error · Issue #17671 · RocketChat/Rocket.Chat · GitHub), if I can join a discussion where people are working with you on the subject, publicly and sharing it freely with everyone.

Have a good week :slight_smile:

Regarding that GitHub issue: openssl did not work for me for generating passwords, so I used pwgen instead. These days I use Terraform to generate them.
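
In case it saves someone a debugging session: the MongoParseError above is the giveaway. openssl rand -base64 emits the base64 alphabet, which includes /, + and =, and an unescaped / in the password breaks the mongodb://user:password@host URI. A sketch of the workaround with pwgen, whose -s flag produces purely alphanumeric passwords:

# pwgen -s <length> <count>: secure, alphanumeric-only passwords,
# so nothing needs escaping inside the connection URI.
MONGO_PASS=$(pwgen -s 32 1)
MONGO_ROOT_PASS=$(pwgen -s 32 1)
helm install stable/rocketchat \
  --set mongodb.mongodbPassword=$MONGO_PASS,mongodb.mongodbRootPassword=$MONGO_ROOT_PASS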