Problem configuring k8s for webhooks and presence statuses

Description

Hi, I have deployed Rocket.Chat (RC) 4.5.6 on k8s and ran into some issues.

First: I have a problem with the webhook integration used by the slack-bridge module.
I configured the slack-bridge for two-way communication, including files and emoji.

When Slack sends a message to an RC channel, the message is actually delivered as many times as there are pods in k8s. But RC is clever: if I understand correctly, RC simply rewrites the messages in the channel, so users really see only one message, but the number of notifications is 10. I have 10 pods in k8s.

I think the problem is in my deployment configuration. I use an nginx-based ingress. My guess is that when Slack sends a message to RC via the webhook, the ingress controller sends the API request to each of my pods.
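For context, my Ingress looks roughly like this (host and service names are placeholders, not my real values):

```yaml
# Sketch of my Ingress, assuming the standard ingress-nginx controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rocketchat
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: chat.example.com       # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rocketchat   # placeholder Service name
                port:
                  number: 3000
```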

Can anybody suggest the right ingress configuration to load-balance each request to only one pod? Or maybe my idea is incorrect and the problem is somewhere else?
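The closest thing I found is cookie-based session affinity in ingress-nginx. Below is a sketch of the annotations as I understand them from the ingress-nginx docs (the cookie name is arbitrary), though I am not sure this helps for webhooks, since Slack's POST probably would not carry the affinity cookie:

```yaml
# Sketch: cookie-based affinity annotations, added to the Ingress above.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "rc-session"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
```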

Second: does anybody know how RC keeps presence statuses? I have another problem with presence status updates between pods.
For example, user1 has a session on pod1 and user2 has a session on pod2. If user1 changes their presence status, user2 does not see the update. But if user2 does a manual refresh, like Ctrl+R, after that user2 sees the correct presence status of user1.

At the same time, if user1 and user2 both have sessions on pod1, presence updates between the users work correctly. I think the problem is that presence statuses are kept in individual storage on each pod, but I cannot find where this is configured.
My idea was that this place should be MongoDB, but I guess I would have the same problem if I deployed 5 instances of RC in Docker containers via docker-compose.
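For reference, the container part of my Deployment looks roughly like this (connection strings are placeholders). I am not sure whether MONGO_OPLOG_URL and INSTANCE_IP are set up correctly; my understanding from the high-availability docs is that multi-instance event sync needs the oplog of a MongoDB replica set plus the pod IP, and my "DB Replicaset Oplog" below is NA, so maybe that is the problem:

```yaml
# Sketch of my container spec; image tag is my real version, URLs are placeholders.
spec:
  containers:
    - name: rocketchat
      image: rocketchat/rocket.chat:4.5.6
      env:
        - name: MONGO_URL
          value: "mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/rocketchat?replicaSet=rs0"
        - name: MONGO_OPLOG_URL   # oplog of the replica set; my guess at what sync needs
          value: "mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/local?replicaSet=rs0"
        - name: INSTANCE_IP       # pod IP, so instances can notify each other
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
```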

For keeping files and uploads I use cloud storage (Amazon S3).

I will be grateful for any ideas.

Server Setup Information

  • Version of Rocket.Chat Server: 4.5.6
  • Operating System: Ubuntu 16
  • Deployment Method: k8s 1.19
  • Number of Running Instances: 10
  • DB Replicaset Oplog: NA
  • NodeJS Version: 16.14.0
  • MongoDB Version: MongoDB cluster 4.4
  • Proxy: k8s ingress
  • Firewalls involved: no

Any additional Information