Login form not showing up in any browser

Description

Hey guys,

Yesterday I set up an instance of Rocket.Chat via Docker Compose. It took a lot of trial and error, but after a few hours I got through the whole installation and had it up and running.
But now I have the following problem:

During the initial setup I was able to create an admin user and log in to that account.
Now, when I open the website, it just shows three animating dots and then no login field appears. It’s just a blank grey page at url/home. I tried Opera, MS Edge, Firefox, Chrome, and Safari. Same problem everywhere… no login form. I tried the IP:port as well as subdomain.domain.de, with no luck. The logs suggest a problem with “too many requests”…

Why is this happening? How can I make sure the login form comes up every time I open the page?
Please help :slight_smile: Thank you!

Server Setup Information

  • Version of Rocket.Chat Server: 4.1.2
  • Operating System: Debian 11 Bullseye
  • Deployment Method: docker-compose
  • Number of Running Instances: 1
  • DB Replicaset Oplog: Enabled
  • NodeJS Version: 12.22.1
  • MongoDB Version: 4.2.17
  • Proxy: Nginx Proxy Manager
  • Firewalls involved: ???

Any additional Information

My setup is:

  • Debian 11 Bullseye
  • Docker
  • Docker-Compose
  • Portainer
  • Nginx Proxy Manager

Here is my docker-compose.yml. I had to change a few things, since I’m using Nginx Proxy Manager instead of Traefik and had to work around deprecated MongoDB versions. I also changed the port to 2000, as I have other tools running and 3000 is already in use:

version: '2'

services:
  rocketchat:
    image: registry.rocket.chat/rocketchat/rocket.chat:latest
    command: >
      bash -c
        "for i in `seq 1 30`; do
          node main.js &&
          s=$$? && break || s=$$?;
          echo \"Tried $$i times. Waiting 5 secs...\";
          sleep 5;
        done; (exit $$s)"
    restart: unless-stopped
    volumes:
      - ./uploads:/app/uploads
    environment:
      - PORT=2000
      - ROOT_URL=http://subdomain.domain.de:2000
      - MONGO_URL=mongodb://rocketchat_mongo_1:27017/rocketchat
      - MONGO_OPLOG_URL=mongodb://rocketchat_mongo_1:27017/local
#       - MAIL_URL=smtp://smtp.email
#       - HTTP_PROXY=http://proxy.domain.com
#       - HTTPS_PROXY=http://proxy.domain.com
    depends_on:
      - mongo
    ports:
      - 2000:2000
    labels:
      - "traefik.backend=rocketchat"
      - "traefik.frontend.rule=Host: your.domain.tld"

  mongo:
    image: mongo:4.2
    restart: unless-stopped
    volumes:
      - ./data/db:/data/db
      - ./data/dump:/dump
    command: mongod --oplogSize 128 --replSet rs0 --storageEngine=wiredTiger
    labels:
      - "traefik.enable=false"

  # this container's job is just to run the command that initializes the replica set.
  # it will run the command and remove itself (it will not stay running)
  mongo-init-replica:
    image: mongo:4.2
    command: >
      bash -c
        "for i in `seq 1 30`; do
          mongo mongo/rocketchat --eval \"
            rs.initiate({
              _id: 'rs0',
              members: [ { _id: 0, host: 'localhost:27017' } ]})\" &&
          s=$$? && break || s=$$?;
          echo \"Tried $$i times. Waiting 5 secs...\";
          sleep 5;
        done; (exit $$s)"
    depends_on:
      - mongo

  # hubot, the popular chatbot (add the bot user first and change the password before starting this image)
#  hubot:
#    image: rocketchat/hubot-rocketchat:latest
#    restart: unless-stopped
#    environment:
#      - ROCKETCHAT_URL=rocketchat:2000
#      - ROCKETCHAT_ROOM=GENERAL
#      - ROCKETCHAT_USER=bot
#      - ROCKETCHAT_PASSWORD=botpassword
#      - BOT_NAME=bot
  # you can add more scripts as you'd like here, they need to be installable by npm
#      - EXTERNAL_SCRIPTS=hubot-help,hubot-seen,hubot-links,hubot-diagnostics
#    depends_on:
#      - rocketchat
#    labels:
#      - "traefik.enable=false"
#    volumes:
#      - ./scripts:/home/hubot/scripts
  # this is used to expose the hubot port for notifications on the host on port 3001, e.g. for hubot-jenkins-notifier
#    ports:
#      - 2001:8080

  #traefik:
  #  image: traefik:latest
  #  restart: unless-stopped
  #  command: >
  #    traefik
  #     --docker
  #     --acme=true
  #     --acme.domains='your.domain.tld'
  #     --acme.email='your@email.tld'
  #     --acme.entrypoint=https
  #     --acme.storagefile=acme.json
  #     --defaultentrypoints=http
  #     --defaultentrypoints=https
  #     --entryPoints='Name:http Address::80 Redirect.EntryPoint:https'
  #     --entryPoints='Name:https Address::443 TLS.Certificates:'
  #  ports:
  #    - 80:80
  #    - 443:443
  #  volumes:
  #    - /var/run/docker.sock:/var/run/docker.sock

In Portainer I had a look at the logs of the running containers.

Here is the log of the rocketchat container:

LocalStore: store created at 
LocalStore: store created at 
LocalStore: store created at 
{"level":51,"time":"2021-11-10T17:40:12.132Z","pid":9,"hostname":"a2a91d0441b8","name":"Migrations","msg":"Not migrating, already at version 243"}
ufs: temp directory created at "/tmp/ufs"
+--------------------------------------------------+
|                  SERVER RUNNING                  |
+--------------------------------------------------+
|                                                  |
|  Rocket.Chat Version: 4.1.2                      |
|       NodeJS Version: 12.22.1 - x64              |
|      MongoDB Version: 4.2.17                     |
|       MongoDB Engine: wiredTiger                 |
|             Platform: linux                      |
|         Process Port: 2000                       |
|             Site URL: http://IP:2000             |
|     ReplicaSet OpLog: Enabled                    |
|          Commit Hash: 2aba8d2d82                 |
|        Commit Branch: HEAD                       |
|                                                  |
+--------------------------------------------------+
(node:9) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
{"level":50,"time":"2021-11-10T17:54:07.318Z","pid":9,"hostname":"a2a91d0441b8","name":"System","msg":"Exception while invoking method userSetUtcOffset 'Error, too many requests. Please slow down. You must wait 39 seconds before trying again. [too-many-requests]'"}
{"level":50,"time":"2021-11-10T17:54:17.410Z","pid":9,"hostname":"a2a91d0441b8","name":"System","msg":"Exception while invoking method userSetUtcOffset 'Error, too many requests. Please slow down. You must wait 60 seconds before trying again. [too-many-requests]'"}
{"level":50,"time":"2021-11-10T17:54:29.150Z","pid":9,"hostname":"a2a91d0441b8","name":"System","msg":"Exception while invoking method autoTranslate.getSupportedLanguages 'Error, too many requests. Please slow down. You must wait 48 seconds before trying again. [too-many-requests]'"}
{"level":50,"time":"2021-11-10T17:58:47.484Z","pid":9,"hostname":"a2a91d0441b8","name":"System","msg":"Exception while invoking method apps/go-enable 'TOTP Required [totp-required]'"}

And here is the log of the mongo container:

2021-11-11T08:14:52.182+0000 I  COMMAND  [LogicalSessionCacheRefresh] command config.$cmd command: update { update: "system.sessions", ordered: false, allowImplicitCollectionCreation: false, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 reslen:245 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 17 } }, ReplicationStateTransition: { acquireCount: { w: 17 } }, Global: { acquireCount: { w: 17 } }, Database: { acquireCount: { w: 17 } }, Collection: { acquireCount: { w: 17 } }, Mutex: { acquireCount: { r: 34 } } } flowControl:{ acquireCount: 17, timeAcquiringMicros: 6 } storage:{} protocol:op_msg 237ms

2021-11-11T08:24:52.142+0000 I  COMMAND  [LogicalSessionCacheRefresh] command config.$cmd command: update { update: "system.sessions", ordered: false, allowImplicitCollectionCreation: false, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 reslen:245 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 17 } }, ReplicationStateTransition: { acquireCount: { w: 17 } }, Global: { acquireCount: { w: 17 } }, Database: { acquireCount: { w: 17 } }, Collection: { acquireCount: { w: 17 } }, Mutex: { acquireCount: { r: 34 } } } flowControl:{ acquireCount: 17, timeAcquiringMicros: 12 } storage:{} protocol:op_msg 197ms

2021-11-11T08:54:52.149+0000 I  COMMAND  [LogicalSessionCacheRefresh] command config.$cmd command: update { update: "system.sessions", ordered: false, allowImplicitCollectionCreation: false, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 reslen:245 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 17 } }, ReplicationStateTransition: { acquireCount: { w: 17 } }, Global: { acquireCount: { w: 17 } }, Database: { acquireCount: { w: 17 } }, Collection: { acquireCount: { w: 17 } }, Mutex: { acquireCount: { r: 34 } } } flowControl:{ acquireCount: 17, timeAcquiringMicros: 8 } storage:{} protocol:op_msg 204ms

2021-11-11T09:14:52.389+0000 I  COMMAND  [LogicalSessionCacheRefresh] command config.$cmd command: update { update: "system.sessions", ordered: false, allowImplicitCollectionCreation: false, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 reslen:245 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 17 } }, ReplicationStateTransition: { acquireCount: { w: 17 } }, Global: { acquireCount: { w: 17 } }, Database: { acquireCount: { w: 17 } }, Collection: { acquireCount: { w: 17 } }, Mutex: { acquireCount: { r: 34 } } } flowControl:{ acquireCount: 17, timeAcquiringMicros: 16 } storage:{} protocol:op_msg 443ms

2021-11-11T10:09:52.049+0000 I  COMMAND  [LogicalSessionCacheRefresh] command config.$cmd command: update { update: "system.sessions", ordered: false, allowImplicitCollectionCreation: false, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 reslen:245 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 17 } }, ReplicationStateTransition: { acquireCount: { w: 17 } }, Global: { acquireCount: { w: 17 } }, Database: { acquireCount: { w: 17 } }, Collection: { acquireCount: { w: 17 } }, Mutex: { acquireCount: { r: 34 } } } flowControl:{ acquireCount: 17, timeAcquiringMicros: 9 } storage:{} protocol:op_msg 104ms

2021-11-11T10:19:52.051+0000 I  COMMAND  [LogicalSessionCacheRefresh] command config.$cmd command: update { update: "system.sessions", ordered: false, allowImplicitCollectionCreation: false, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 reslen:548 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 20 } }, ReplicationStateTransition: { acquireCount: { w: 20 } }, Global: { acquireCount: { w: 20 } }, Database: { acquireCount: { w: 20 } }, Collection: { acquireCount: { w: 20 } }, Mutex: { acquireCount: { r: 40 } } } flowControl:{ acquireCount: 20, timeAcquiringMicros: 10 } storage:{} protocol:op_msg 105ms

Hi!

I have seen similar issues when the ROOT_URL setting (Admin > General > Site URL) differs from the URL the server is actually reachable at.

Are you using some kind of reverse proxy and accessing the server via the domain?
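
In a compose setup the fix is usually to make ROOT_URL agree with the proxy. A minimal sketch of what I mean, assuming TLS terminates at the proxy and the site is served at https://subdomain.domain.de:

    environment:
      # ROOT_URL must match what the browser actually uses, scheme and host included;
      # no container port here if the proxy listens on 443
      - ROOT_URL=https://subdomain.domain.de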

Hi, thanks for your reply @dudanogueira!

I’m using Nginx Proxy Manager (NPM) as a reverse proxy, which works and forwards me to Rocket.Chat.
In the docker-compose.yml I set ROOT_URL to http://subdomain.domain.de:2000. Should this be https?
I can’t log in anymore: when the login fields do come up and I enter my credentials, it redirects to /home, something flashes for a millisecond, and then the page turns grey. No chance to get into the admin panel…

Edit:

I was able to log in via the IP and changed the Site URL in the admin settings to https and the subdomain. The problem persists… :confused:

Edit 2:

After I managed to log in, I disabled all rate limits, and now it works. So this is a rate-limiting problem… weird.
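
For anyone who finds this later: the switches I flipped are in the admin panel under Rate Limiter. The same thing can presumably be pinned in the compose file via Rocket.Chat’s OVERWRITE_SETTING_ prefix. An untested sketch, and the two setting IDs are my assumption, so verify them against your admin panel first:

    environment:
      # assumed setting IDs - check them under Admin > Rate Limiter before relying on this
      - OVERWRITE_SETTING_API_Enable_Rate_Limiter=false
      - OVERWRITE_SETTING_DDP_Rate_Limit_IP_Enabled=false

After editing, docker-compose up -d recreates the container so the overrides take effect.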

Probably…

It got into an infinite redirect loop and consumed your API rate limit.

If you are using nginx as a reverse proxy, I assume you are forwarding the traffic from subdomain.domain.de to the port inside your network.

If that’s the case, you shouldn’t need the :2000 in the ROOT_URL.
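
For example, a plain-nginx equivalent of what Nginx Proxy Manager generates would look roughly like this (a sketch, assuming TLS terminates at the proxy and the container listens on 2000; the websocket headers are the part Rocket.Chat can’t work without):

server {
    listen 443 ssl;
    server_name subdomain.domain.de;

    # ssl_certificate / ssl_certificate_key as issued for the domain

    location / {
        proxy_pass http://127.0.0.1:2000;
        proxy_http_version 1.1;
        # websocket upgrade for Rocket.Chat's realtime (DDP) traffic
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

In Nginx Proxy Manager that corresponds to enabling the Websockets Support toggle on the proxy host, with ROOT_URL then set to https://subdomain.domain.de without the port.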

Here we have some docs on how to do that:

So, were you able to solve the issue after all?