Snap migration failed for 0.66.1


#1

If your server suddenly doesn’t start after updating to 0.66.1, check the logs with sudo journalctl -u snap.rocketchat-server.rocketchat-server -n 100 and look for errors like the ones below:

+----------------------------------------------------------------------+
| |
| ERROR! SERVER STOPPED |
| |
| Your database migration failed: |
| cannot use the part (settings of settings.preferences.groupByType) to traverse the element ({settings: null}) |
| |
| Please make sure you are running the latest version and try again. |
| If the problem persists, please contact support. |
| |
| This Rocket.Chat version: 0.66.1 |
| Database locked at version: 119 |
| Database target version: 129 |
| |
| Commit: fb5257f |
| Date: Wed Jul 4 15:05:11 2018 -0300 |
| Branch: HEAD |
| Tag: 0.66.1 |
+----------------------------------------------------------------------+

or other errors like:
+----------------------------------------------------------------------+
| |
| ERROR! SERVER STOPPED |
| |
| Your database migration failed: |
| Cannot read property 'desktopNotifications' of undefined |
| |
| Please make sure you are running the latest version and try again. |
| If the problem persists, please contact support. |
| |
| This Rocket.Chat version: 0.66.1 |
| Database locked at version: 126 |
| Database target version: 129 |
| |
| Commit: fb5257f |
| Date: Wed Jul 4 15:05:11 2018 -0300 |
| Branch: HEAD |
| Tag: 0.66.1 |
| |
+----------------------------------------------------------------------+

If you see something like this… please try:

sudo rocketchat-server.mongo
// the snap stores Rocket.Chat's data in a database named 'parties'
use parties
db.migrations.update({_id: 'control'}, {$set: {locked: false, version: 125}})
// Let's re-run the failed migration by hand
db.users.update({
	'settings.preferences.groupByType': { $exists: true }
}, {
	$rename: {
		'settings.preferences.groupByType': 'settings.preferences.sidebarGroupByType'
	}
}, {
	multi: true
});

Then exit the mongo shell and restart Rocket.Chat
sudo systemctl restart snap.rocketchat-server.rocketchat-server


If for some reason sudo rocketchat-server.mongo doesn’t work, replace it with: sudo snap run rocketchat-server.mongo
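The migration trips because some user documents have `settings: null`, so the migration's `$rename` cannot traverse the dotted path `settings.preferences.groupByType`. The manual update above avoids that by filtering with `{$exists: true}`. A minimal Python sketch of that logic (the helper functions are hypothetical illustrations, not Rocket.Chat or MongoDB code):

```python
# Sketch of why `settings: null` breaks the migration and why the manual
# update with {$exists: true} is safe. Helpers are hypothetical.

def has_path(doc, path):
    """Return True if the dotted path exists in the document."""
    cur = doc
    for key in path.split("."):
        if not isinstance(cur, dict) or key not in cur:
            return False
        cur = cur[key]
    return True

def rename_path(doc, old, new):
    """Simulate MongoDB's $rename: move the value at `old` to `new`."""
    *parents, last = old.split(".")
    cur = doc
    for key in parents:
        cur = cur[key]          # would blow up if a parent were None
    value = cur.pop(last)
    *new_parents, new_last = new.split(".")
    cur = doc
    for key in new_parents:
        cur = cur.setdefault(key, {})
    cur[new_last] = value

users = [
    {"_id": "a", "settings": {"preferences": {"groupByType": True}}},
    {"_id": "b", "settings": None},   # the shape that broke the migration
]

for user in users:
    # Filtering on existence skips user "b" instead of raising, which is
    # exactly the guard the failed migration was missing.
    if has_path(user, "settings.preferences.groupByType"):
        rename_path(user, "settings.preferences.groupByType",
                    "settings.preferences.sidebarGroupByType")

print(users[0])  # groupByType renamed to sidebarGroupByType
print(users[1])  # untouched
```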


Related issue:


Exception in defer callback: RangeError: Maximum call stack size exceeded
RC 0.65.2 works but not any 0.66.x
Snap update to 66.1 broke server
#2

#3

root@chat:~# mongo
MongoDB shell version v3.4.15
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.15
Server has startup warnings:
2018-07-06T23:16:46.670+0300 I STORAGE [initandlisten]
2018-07-06T23:16:46.670+0300 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-07-06T23:16:46.670+0300 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2018-07-06T23:16:48.366+0300 I CONTROL [initandlisten]
2018-07-06T23:16:48.366+0300 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-07-06T23:16:48.366+0300 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2018-07-06T23:16:48.366+0300 I CONTROL [initandlisten]
2018-07-06T23:16:48.367+0300 I CONTROL [initandlisten]
2018-07-06T23:16:48.368+0300 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
2018-07-06T23:16:48.368+0300 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
2018-07-06T23:16:48.368+0300 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
2018-07-06T23:16:48.368+0300 I CONTROL [initandlisten]
rs0:PRIMARY> show dbs
admin 0.000GB
chat 0.001GB
local 0.001GB
rocketchat 0.898GB
undefined 0.000GB
rs0:PRIMARY> use rocketchat
switched to db rocketchat
rs0:PRIMARY> db.migrations.update({_id: 'control'}, {$set: {locked: false, version: 125}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
rs0:PRIMARY> db.users.update({
... 'settings.preferences.groupByType': { $exists: true }
... }, {
... $rename: {
... 'settings.preferences.groupByType': 'settings.preferences.sidebarGroupByType'
... }
... }, {
... multi: true
... });
WriteResult({ "nMatched" : 0, "nUpserted" : 0, "nModified" : 0 })
rs0:PRIMARY> ^C
bye
root@chat:~# systemctl restart rocketchat

This is what I’ve done after upgrading RC to 0.66.2. … Same old error!!
Reverting to 0.65.2 … worked again.

I’m stuck. Please help!!


#4

What exactly is your error?

After upgrading to 0.66.2, you should be able to do something like:

db.migrations.update({_id: 'control'}, {$set: {locked: false, version: 124}})
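Rocket.Chat uses a Meteor-style migrator: the `migrations` control document records the version the database has reached, and on startup the server runs every registered migration between that version and the target, taking a lock so two instances can't race. Resetting the control document re-arms the failed steps. A toy runner (hypothetical code, not Rocket.Chat's actual migrator) showing the mechanic:

```python
# Toy Meteor-style migration runner: shows why setting the control doc
# to {locked: false, version: 124} makes the next start retry 125..129.

control = {"_id": "control", "version": 124, "locked": False}

# Registered migrations keyed by the version they bring the DB up to.
migrations = {v: (lambda v=v: f"ran migration {v}") for v in range(125, 130)}

def migrate_to(target):
    if control["locked"]:
        raise RuntimeError("migration locked; a previous run failed?")
    control["locked"] = True            # lock so two servers can't race
    ran = []
    for version in range(control["version"] + 1, target + 1):
        ran.append(migrations[version]())
        control["version"] = version    # persist progress step by step
    control["locked"] = False
    return ran

print(migrate_to(129))  # runs migrations 125 through 129 in order
print(control)          # version now 129, lock released
```

This is also why a crash mid-migration leaves `locked: true` with a stale `version`: the unlock line is never reached, and the server refuses to start until the control document is reset by hand.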

#5

rocketchat.access.log:
connect() failed (111: Connection refused) while connecting to upstream

In browser I got:
502 Bad Gateway

I have to mention that I’m using Ubuntu Server 18.04, not the snap package.
Thank you!


#6

That’s not nearly enough log to make out any details at all.

I can only guess from the vague message that MongoDB isn’t started, so Rocket.Chat might be unable to start.

With more logs I might have a better idea.


#7

Rocket.Chat version 0.66.3

root@chat:~# npm -v
5.6.0

root@chat:~# uname -a
Linux chat 4.15.17-3-pve #1 SMP PVE 4.15.17-14 (Wed, 27 Jun 2018 17:18:05 +0200) x86_64 x86_64 x86_64 GNU/Linux

root@chat:~# ps ax

PID TTY STAT TIME COMMAND
1 ? Ss 0:00 /sbin/init
45 ? S<s 0:00 /lib/systemd/systemd-journald
142 ? Ss 0:00 /lib/systemd/systemd-resolved
144 ? Ssl 0:00 /usr/sbin/rsyslogd -n
145 ? Ssl 0:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
146 ? Ss 0:00 /lib/systemd/systemd-logind
147 ? Ss 0:00 /usr/sbin/cron -f
148 ? Ssl 0:00 /usr/lib/udisks2/udisksd
149 ? Ssl 0:00 /usr/lib/snapd/snapd
150 ? Ssl 0:00 /usr/lib/accountsservice/accounts-daemon
151 ? Ss 0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
230 ? Ssl 0:04 /usr/bin/mongod --config /etc/mongod.conf
231 ? Ss 0:00 php-fpm: master process (/etc/php/7.2/fpm/php-fpm.conf)
237 ? Ssl 0:00 /usr/bin/python3 /usr/bin/fail2ban-server -xf start
247 console Ss+ 0:00 /sbin/agetty -o -p – \u --noclear --keep-baud console 115200,38400,9600 linux
248 pts/0 Ss 0:00 /bin/login -p –
263 ? Ssl 0:00 /usr/lib/policykit-1/polkitd --no-debug
311 ? Ss 0:00 /usr/sbin/sshd -D
417 ? S 0:00 php-fpm: pool www
418 ? S 0:00 php-fpm: pool www
425 ? Ss 0:00 /usr/lib/postfix/sbin/master -w
426 ? S 0:00 pickup -l -t unix -u -c
427 ? S 0:00 qmgr -l -t unix -u
430 ? Ss 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
431 ? S 0:00 nginx: worker process
432 ? S 0:00 nginx: worker process
433 ? S 0:00 nginx: worker process
434 ? S 0:00 nginx: worker process
435 ? S 0:00 nginx: worker process
436 ? S 0:00 nginx: worker process
437 ? S 0:00 nginx: worker process
438 ? S 0:00 nginx: worker process
439 ? S 0:00 nginx: worker process
440 ? S 0:00 nginx: worker process
441 ? S 0:00 nginx: worker process
442 ? S 0:00 nginx: worker process
443 ? S 0:00 nginx: worker process
444 ? S 0:00 nginx: worker process
445 ? S 0:00 nginx: worker process
446 ? S 0:00 nginx: worker process
496 ? Ssl 0:00 /usr/local/bin/node /usr/local/lib/node_modules/forever/bin/monitor main.js
562 ? Ss 0:00 /usr/bin/perl /usr/share/webmin/miniserv.pl /etc/webmin/miniserv.conf
573 ? Sl 0:12 /usr/local/bin/node /var/www/Rocket.Chat/main.js
703 ? Ss 0:00 /lib/systemd/systemd --user
704 ? S 0:00 (sd-pam)
715 pts/0 S 0:00 -bash
1078 pts/0 R+ 0:00 ps ax

/var/log/nginx/rocketchat.access.log:

2018/07/10 14:11:29 [error] 431#431: *54 connect() failed (111: Connection refused) while connecting to upstream, client: 93.114.74.2, server: chat.proquality.ro, request: “GET /_timesync HTTP/1.1”, $

/etc/nginx/sites-available/rocket-chat:

upstream backend {
server 127.0.0.1:3000;
}

server {
listen 80;
server_name chat.xxx.com;
return 301 https://$server_name$request_uri;
}

server {
listen 127.0.0.1;
listen 443;
server_name chat.xxx.com;

error_log /var/log/nginx/rocketchat.access.log;

ssl on;
ssl_certificate /etc/letsencrypt/live/chat.xxx.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/chat.xxx.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # dont use SSLv3 ref: POODLE
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !P$

ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;

location / {
    proxy_pass http://backend/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Nginx-Proxy true;
    proxy_redirect off;

}
}
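The "connect() failed (111: Connection refused)" in the nginx log means nothing is accepting TCP connections on the upstream address, 127.0.0.1:3000 here, i.e. the Rocket.Chat node process is down or bound to a different port; nginx then returns the 502. A small check you could run on the server to confirm (a hypothetical helper, not part of nginx or Rocket.Chat):

```python
# Check whether anything is listening where nginx's `upstream backend`
# points (127.0.0.1:3000). If not, nginx logs "111: Connection refused"
# and the browser sees 502 Bad Gateway.
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ECONNREFUSED, timeout, host unreachable, ...
        return False

if __name__ == "__main__":
    up = is_listening("127.0.0.1", 3000)
    print("Rocket.Chat backend is", "up" if up else "down (nginx will 502)")
```

`ss -tlnp | grep 3000` on the server gives the same answer and also names the process holding the port.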


#8

/var/log/nginx/rocketchat.access.log:

2018/07/10 14:11:29 [error] 431#431: *54 connect() failed (111: Connection refused) while connecting to upstream, client: 1.2.3.4, server: chat.xxx.com, request: “GET /_timesync HTTP/1.1”, $

/var/log/mongodb/mongod.log

2018-07-10T14:44:15.684+0300 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2018-07-10T14:44:15.684+0300 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets…
2018-07-10T14:44:15.684+0300 I NETWORK [signalProcessingThread] closing listening socket: 7
2018-07-10T14:44:15.684+0300 I NETWORK [signalProcessingThread] closing listening socket: 8
2018-07-10T14:44:15.684+0300 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
2018-07-10T14:44:15.684+0300 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog…
2018-07-10T14:44:15.684+0300 I REPL [signalProcessingThread] shutting down replication subsystems
2018-07-10T14:44:15.684+0300 I REPL [signalProcessingThread] Stopping replication reporter thread
2018-07-10T14:44:15.684+0300 I REPL [signalProcessingThread] Stopping replication fetcher thread
2018-07-10T14:44:15.685+0300 I REPL [signalProcessingThread] Stopping replication applier thread
2018-07-10T14:44:15.913+0300 I REPL [signalProcessingThread] Stopping replication storage threads
2018-07-10T14:44:15.940+0300 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2018-07-10T14:44:15.944+0300 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2018-07-10T14:44:16.056+0300 I STORAGE [signalProcessingThread] shutdown: removing fs lock…
2018-07-10T14:44:16.057+0300 I CONTROL [signalProcessingThread] now exiting
2018-07-10T14:44:16.057+0300 I CONTROL [signalProcessingThread] shutting down with code:0
2018-07-10T14:44:29.350+0300 I CONTROL [main] ***** SERVER RESTARTED *****
2018-07-10T14:44:29.429+0300 I CONTROL [initandlisten] MongoDB starting : pid=220 port=27017 dbpath=/var/lib/mongodb 64-bit host=chat
2018-07-10T14:44:29.429+0300 I CONTROL [initandlisten] db version v3.4.15
2018-07-10T14:44:29.429+0300 I CONTROL [initandlisten] git version: 52e5b5fbaa3a2a5b1a217f5e647b5061817475f9
2018-07-10T14:44:29.429+0300 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2n 7 Dec 2017
2018-07-10T14:44:29.429+0300 I CONTROL [initandlisten] allocator: tcmalloc
2018-07-10T14:44:29.429+0300 I CONTROL [initandlisten] modules: none
2018-07-10T14:44:29.429+0300 I CONTROL [initandlisten] build environment:
2018-07-10T14:44:29.429+0300 I CONTROL [initandlisten] distmod: ubuntu1604
2018-07-10T14:44:29.429+0300 I CONTROL [initandlisten] distarch: x86_64
2018-07-10T14:44:29.429+0300 I CONTROL [initandlisten] target_arch: x86_64
2018-07-10T14:44:29.429+0300 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { port: 27017 }, replication: { oplogSizeMB: 1, replSetName: "rs0" }, storage: { dbPath: "/var/lib/mongodb", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongod.log" } }
2018-07-10T14:44:29.502+0300 I - [initandlisten] Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2018-07-10T14:44:29.503+0300 I STORAGE [initandlisten]
2018-07-10T14:44:29.503+0300 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-07-10T14:44:29.503+0300 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2018-07-10T14:44:29.503+0300 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1244M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),verbose=(recovery_progress),
2018-07-10T14:44:30.060+0300 I STORAGE [initandlisten] WiredTiger message [1531223070:60366][220:0x7f8083237180], txn-recover: Main recovery loop: starting at 309/397184
2018-07-10T14:44:30.172+0300 I STORAGE [initandlisten] WiredTiger message [1531223070:172537][220:0x7f8083237180], txn-recover: Recovering log 309 through 310
2018-07-10T14:44:30.241+0300 I STORAGE [initandlisten] WiredTiger message [1531223070:241312][220:0x7f8083237180], txn-recover: Recovering log 310 through 310
2018-07-10T14:44:30.627+0300 I STORAGE [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs
2018-07-10T14:44:30.627+0300 I STORAGE [initandlisten] The size storer reports that the oplog contains 6573 records totaling to 1092269 bytes
2018-07-10T14:44:30.631+0300 I STORAGE [initandlisten] Sampling from the oplog between Jul 9 22:38:01:1 and Jul 10 14:44:13:1 to determine where to place markers for truncation
2018-07-10T14:44:30.631+0300 I STORAGE [initandlisten] Taking 104 samples and assuming that each section of oplog contains approximately 632 records totaling to 105022 bytes
2018-07-10T14:44:30.714+0300 I STORAGE [initandlisten] Placing a marker at optime Jul 10 00:11:02:1
2018-07-10T14:44:30.714+0300 I STORAGE [initandlisten] Placing a marker at optime Jul 10 01:45:00:1
2018-07-10T14:44:30.714+0300 I STORAGE [initandlisten] Placing a marker at optime Jul 10 03:07:23:1
2018-07-10T14:44:30.714+0300 I STORAGE [initandlisten] Placing a marker at optime Jul 10 05:05:23:1
2018-07-10T14:44:30.714+0300 I STORAGE [initandlisten] Placing a marker at optime Jul 10 06:39:54:1
2018-07-10T14:44:30.714+0300 I STORAGE [initandlisten] Placing a marker at optime Jul 10 08:48:34:1
2018-07-10T14:44:30.714+0300 I STORAGE [initandlisten] Placing a marker at optime Jul 10 10:06:45:1
2018-07-10T14:44:30.714+0300 I STORAGE [initandlisten] Placing a marker at optime Jul 10 11:55:45:1
2018-07-10T14:44:30.714+0300 I STORAGE [initandlisten] Placing a marker at optime Jul 10 13:19:16:1
2018-07-10T14:44:30.714+0300 I STORAGE [initandlisten] Placing a marker at optime Jul 10 14:03:43:a6
2018-07-10T14:44:32.294+0300 I CONTROL [initandlisten]
2018-07-10T14:44:32.294+0300 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-07-10T14:44:32.294+0300 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2018-07-10T14:44:32.294+0300 I CONTROL [initandlisten]
2018-07-10T14:44:32.296+0300 I CONTROL [initandlisten]
2018-07-10T14:44:32.296+0300 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
2018-07-10T14:44:32.296+0300 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
2018-07-10T14:44:32.296+0300 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
2018-07-10T14:44:32.296+0300 I CONTROL [initandlisten]
2018-07-10T14:44:32.672+0300 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongodb/diagnostic.data'
2018-07-10T14:44:32.674+0300 I REPL [replExecDBWorker-2] New replica set config in use: { _id: "rs0", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "chat:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: 2000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('58a476efd7b4c08371c9405d') } }
2018-07-10T14:44:32.674+0300 I REPL [replExecDBWorker-2] This node is chat:27017 in the config
2018-07-10T14:44:32.674+0300 I REPL [replExecDBWorker-2] transition to STARTUP2
2018-07-10T14:44:32.674+0300 I REPL [replExecDBWorker-2] Starting replication storage threads
2018-07-10T14:44:32.675+0300 I REPL [replExecDBWorker-2] Starting replication fetcher thread
2018-07-10T14:44:32.675+0300 I REPL [replExecDBWorker-2] Starting replication applier thread
2018-07-10T14:44:32.675+0300 I REPL [replExecDBWorker-2] Starting replication reporter thread
2018-07-10T14:44:32.675+0300 I REPL [rsSync] transition to RECOVERING
2018-07-10T14:44:32.675+0300 I REPL [rsSync] transition to SECONDARY
2018-07-10T14:44:32.676+0300 I REPL [rsSync] conducting a dry run election to see if we could be elected. current term: 266
2018-07-10T14:44:32.676+0300 I REPL [ReplicationExecutor] dry election run succeeded, running for election in term 267
2018-07-10T14:44:32.676+0300 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 267
2018-07-10T14:44:32.676+0300 I REPL [ReplicationExecutor] transition to PRIMARY
2018-07-10T14:44:32.676+0300 I REPL [ReplicationExecutor] Entering primary catch-up mode.
2018-07-10T14:44:32.676+0300 I REPL [ReplicationExecutor] Exited primary catch-up mode.
2018-07-10T14:44:32.677+0300 I NETWORK [thread1] waiting for connections on port 27017
2018-07-10T14:44:33.677+0300 I REPL [rsSync] transition to primary complete; database writes are now permitted
2018-07-10T14:44:36.652+0300 I NETWORK [thread1] connection accepted from 1.2.3.5:55938 #1 (1 connection now open)
2018-07-10T14:44:36.658+0300 I NETWORK [conn1] received client metadata from 1.2.3.5:55938 conn1: { driver: { name: "nodejs", version: "2.2.34" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.17-3-pve" }, platform: "Node.js v8.11.3, LE, mongodb-core: 2.1.18" }
2018-07-10T14:44:36.711+0300 I NETWORK [thread1] connection accepted from 127.0.0.1:39724 #2 (2 connections now open)
2018-07-10T14:44:36.712+0300 I NETWORK [conn2] received client metadata from 127.0.0.1:39724 conn2: { driver: { name: "nodejs", version: "2.2.34" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.17-3-pve" }, platform: "Node.js v8.11.3, LE, mongodb-core: 2.1.18" }
2018-07-10T14:44:36.713+0300 I NETWORK [thread1] connection accepted from 127.0.0.1:39726 #3 (3 connections now open)
2018-07-10T14:44:36.714+0300 I NETWORK [conn3] received client metadata from 127.0.0.1:39726 conn3: { driver: { name: "nodejs", version: "2.2.34" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.17-3-pve" }, platform: "Node.js v8.11.3, LE, mongodb-core: 2.1.18" }
2018-07-10T14:44:42.350+0300 I NETWORK [thread1] connection accepted from 1.2.3.5:55946 #4 (4 connections now open)


#9

That’s every log except for the one I actually need. 🙂

I need the log from Rocket.Chat itself. We have to start at the application layer and work our way up.