Rocket.Chat on Ubuntu 20.04 no longer works

I’ve had a Rocket.Chat server set up for several weeks on Ubuntu 20.04 using snaps. Out of the blue, it no longer works.
When checking services, snap.rocketchat-server won’t start; it stops immediately after I issue a start command.
I’ve tried restoring the server from a previous day when I know it worked, and same thing happens.
Did something get pushed from somewhere that is breaking my server?

Same here, it basically seems to hang after the following message. The local port 3000 is never opened, and therefore we get a Bad Gateway error:

Aug 13 15:09:02 prod-rocketchat1 rocketchat-server.rocketchat-server[10033]: {"line":"120","file":"migrations.js","message":"Migrations: Not migrating, already at version 202","time":{"$date":1597324142063},"level":"info"}

Any idea what happened? I guess I’m confused how my own server would break all of a sudden.

I got the server to start again by blocking outbound traffic as follows:
iptables -I OUTPUT -d -j REJECT

Not necessarily recommending this as a solution - it will break the push gateway and possibly your marketplace apps, but for us it was the only way to even get our server to start.
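To make the rule above concrete, here is a minimal sketch using a documentation placeholder address (203.0.113.10) in place of the real gateway IP, which I haven’t included here:

```shell
# Placeholder address (TEST-NET-3); substitute the gateway IP you observed.
DEST="203.0.113.10"
RULE="iptables -I OUTPUT -d ${DEST} -j REJECT"
echo "$RULE"   # review it first, then run it with: sudo $RULE
```

Keep in mind this REJECTs all outbound traffic to that address; to undo it later, delete the rule with the same spec: `sudo iptables -D OUTPUT -d 203.0.113.10 -j REJECT`.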

Then did you just: sudo systemctl start snap.rocketchat-server.rocketchat-server.service ?

For me, it still does nothing.

I went with snap restart rocketchat-server, but your way should work too. Maybe you took a different route? In my case, I waited for the server to reach the point where it hangs, and then ran the following:

lsof -p $(pgrep -f 'node /snap') | grep IPv4

I would see something like this:

node    13593 root   18u     IPv4              69496      0t0    TCP localhost:35120->localhost:27017 (ESTABLISHED)
node    13593 root   19u     IPv4              69497      0t0    TCP localhost:35122->localhost:27017 (ESTABLISHED)
node    13593 root   20u     IPv4              69498      0t0    TCP localhost:35124->localhost:27017 (ESTABLISHED)
node    13593 root   21u     IPv4              70390      0t0    TCP localhost:35126->localhost:27017 (ESTABLISHED)
node    13593 root   22u     IPv4              71948      0t0    TCP localhost:35128->localhost:27017 (ESTABLISHED)
node    13593 root   23u     IPv4              72049      0t0    TCP> (ESTABLISHED)

Then I just went ahead and got the address using host, and blocked the /24 subnet (this might be too broad, but I didn’t bother looking into it further).
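The steps above can be sketched as follows; the hostname and IP here are placeholders, since I’m not naming the real endpoint:

```shell
# Resolve the remote endpoint seen in lsof (hostname below is a placeholder):
# IP=$(host -t A gateway.example.com | awk '/has address/ {print $4; exit}')
IP="203.0.113.45"          # placeholder documentation address
SUBNET="${IP%.*}.0/24"     # drop the last octet to get the /24
echo "$SUBNET"             # -> 203.0.113.0/24
# Block outbound traffic to the whole subnet:
# sudo iptables -I OUTPUT -d "$SUBNET" -j REJECT
```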

Hmm interesting, not working on my side.
I was wondering if it had something to do with Caddy. When I set up my server, I had to edit the /etc/hosts file and create an entry pointing the DNS name to my public IP, because for some stupid reason they want this to match internally. Sorry Snap/RocketChat, but I have an INTERNAL DNS SERVER!!!
When finished, I reverted the /etc/hosts file back to its original form… so I was guessing rocketchat/snap was screwing up because of this, weeks after setup was completed.

This option of REJECTing the IP address worked for me as well. Looks like there are issues connecting to it: I can’t sign in to my profile there, nor access the server workspace.

The snap auto-updated to 3.5.1 last night, introducing the Push Gateway feature that will now track/bill your RC workspace instance for push notifications. I’m sure there must be a bug in the deployment.

I’m going to delete the host and redeploy without snaps. Snap is an effing joke.
Scratch that, I’m moving to Mattermost…

My rocketchat instance also crashed out of the blue without me doing anything (no scheduled updates, no nothing) and doesn’t work anymore. I must say it is not reliable enough for production; my server has been down the whole day and there’s no fix in sight.

Same here. Out of the blue, I’m no longer able to get the service running. Also, notifications are no longer working. The server has not been touched (it’s only used by myself and 2 kids). I was able to restore an old version, but I can’t seem to disable snap from auto-refreshing!
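As far as I know (this depends on your snapd version, so treat it as a sketch): newer snapd releases (2.58+) can hold refreshes per snap, while older ones can only postpone all refreshes with a refresh.hold timestamp:

```shell
# Newer snapd (>= 2.58): hold refreshes for one snap indefinitely.
#   sudo snap refresh --hold rocketchat-server
# Older snapd: postpone ALL refreshes (up to ~90 days) via a timestamp.
HOLD_UNTIL=$(date --iso-8601=seconds --date='+60 days')
echo "sudo snap set system refresh.hold=${HOLD_UNTIL}"
```

Note that on older snapd the hold eventually expires, so you’d have to renew it periodically.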

I have Mattermost running on another server at home and was trial testing Rocket.Chat to see if I want to use that.

Sorry to be so vague, but we really don’t know what caused the issue and are looking into it.

Still not working from my side. I restored my VM from a backup of 2 days ago when I know it was working, powered on, and still the same “Bad Gateway” error from nginx and port 3000 never opens locally on the host.
