Slow client load time, an analysis

I am noticing that my boot time into Rocket.Chat (hosted by you) from Australia is really slow.

Here are some notes on what I am noticing on reload.

On my desktop I am seeing about 6 seconds on reload into our #general channel.

Only about 2 of those seconds are JS execution and evaluation.

There is a huge “gap” between the first evaluation (which results in a white screen) and the second evaluation that brings up the content. During the big gap no CPU work appears to be happening; it is all on the network.

In the Network tab I see a huge number of WebSocket frames, the worst of which is a 157 KB frame.

Since the WebSocket frames carry no compression AND a huge number of frames are sent, the gap is likely the result of too much uncompressed data being sent combined with a bunch of round trips.
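To get a rough feel for how much uncompressed frames cost, here is a quick sketch. The payload below is simulated (a repetitive list of settings-like records roughly the size of the 157 KB frame, not the real data), and `zlib` stands in for what a deflate-based compression scheme would do:

```python
import json
import zlib

# Simulate a settings-style payload: many records with repetitive keys,
# roughly the size of the observed ~157 KB frame. (Illustrative data only.)
settings = [
    {"_id": f"Setting_{i}", "value": i % 7 == 0, "type": "boolean", "group": "General"}
    for i in range(2000)
]
raw = json.dumps(settings).encode("utf-8")

# Deflate at the default-ish level, similar to what per-message compression would apply.
compressed = zlib.compress(raw, 6)

print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes")
print(f"ratio: {len(raw) / len(compressed):.1f}x")
```

Settings-style JSON is extremely repetitive, so the compression ratio is large; the same frame compressed would be a small fraction of the bytes on the wire.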

Additionally, a bunch of sound assets are delivered that should not be needed until much later, and they should be cached properly but are not:

Recommendations

  1. Given the large duration here I would recommend a loading screen like Slack has, because the white screen is not great

  2. Amend it so the initial data payload is delivered with the initial request rather than trickled through via WebSockets

  3. Do a full asset audit, confirming you deliver only the assets people really need on initial page load and that they are cached correctly.
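Recommendation 2 could look something like embedding the initial data in the HTML served on first request, so the client can boot without waiting on WebSocket round trips. A minimal sketch (the `render_index` helper and the payload shape are hypothetical, not Rocket.Chat's actual code):

```python
import json

def render_index(initial_settings: dict) -> str:
    """Embed the initial data payload in the first HTML response.

    The client reads it synchronously on boot instead of waiting for the
    same data to trickle in over the WebSocket.
    """
    # Escape "</" so the payload cannot break out of the <script> element.
    payload = json.dumps(initial_settings).replace("</", "<\\/")
    return f"""<!doctype html>
<html>
<head><script id="initial-data" type="application/json">{payload}</script></head>
<body><div id="app">Loading…</div><script src="/app.js"></script></body>
</html>"""

html = render_index({"Site_Name": "Rocket.Chat", "UI_Use_Real_Name": True})
print(html)
```

On the client, something like `JSON.parse(document.getElementById('initial-data').textContent)` would then recover the settings with zero extra round trips.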

Happy to open multiple issues on GitHub if you wish.


@sam thanks for the awesome detailed breakdown.

I know for our hosting specifically we are going to be adding a CDN in front to speed up asset delivery.

I like the idea of showing a loading screen to improve UX, especially if required assets are taking longer to load.

Just wanted to follow up after getting back from holiday to take a proper look at this.

This particular frame seems to be the response carrying all of the settings.

In the web app pretty much everything is loaded over a WebSocket. This makes for great real-time communication, but it has the downside that everything comes over a single pipe.

Out of curiosity, what is the latency between you in Australia and your server?

This is definitely a good suggestion. On our Rocket.Chat+ mobile applications we’ve started using more of our REST APIs for increased compression, and we hope it will help in high-latency situations like 3G; it should also help with distant servers.

Thoughts?

Ok, this one is an easy fix for us. https://github.com/RocketChat/Rocket.Chat/search?utf8=✓&q=cache-control&type= We’re hard-coding these… We should at the very least allow this to be controlled. I honestly don’t imagine people re-uploading the same sound file, emoji, etc. over and over again, so caching these seems like a no-brainer.

I’ve opened an issue: Cache sound/emoji assets · Issue #9313 · RocketChat/Rocket.Chat · GitHub

Always happy to pick minds :smile:


My ping time is 208 ms.
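At that latency, sequential round trips add up fast. A back-of-the-envelope calculation (the round-trip counts are illustrative; the actual number of dependent exchanges in the boot sequence is unknown):

```python
RTT_MS = 208  # measured ping from Australia

# If booting requires N dependent request/response exchanges
# (connect, login, subscriptions, settings, ...), latency alone costs:
for round_trips in (1, 5, 10, 15):
    cost_s = round_trips * RTT_MS / 1000
    print(f"{round_trips:2d} sequential round trips -> {cost_s:.1f}s of pure latency")
```

Fifteen dependent round trips would account for over 3 seconds of the gap before a single byte of payload is counted, which is why collapsing the boot sequence into fewer exchanges matters as much as compression.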

I think it would help enormously. Once you do this refactor I would be super happy to profile again; just let me know.

I’ve changed the title to make it clearer that this is a client-side analysis, not a server-side one.