So we’ve been having an issue where file uploads are super slow, and I found out why. Apparently files are uploaded via many small sequential POST requests. For example, a 100 KiB file (dd’d from /dev/zero) takes four requests to upload, and a 6.05 MB JPEG took about 160 requests!! Each request takes around 300 ms before the next one is sent, causing the upload to take a stupidly long time. See this GIF:

Why in the hell does it do this?! It’s incredibly inefficient and makes file uploads super slow, especially for those with high latency (my ping to the server is about 170 ms, as it’s on the other side of the planet).
Is there a way to increase the chunk size? It seems to upload in chunks of about 25 KB, which is really small; increasing this would speed up uploads dramatically.
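To illustrate why this is so slow: with sequential chunks, each request costs a full round trip before the next one can be sent, so latency alone dominates. A rough back-of-envelope model (chunk size and delay are the approximate values observed above, not exact figures):

```javascript
// Rough model of sequential chunked upload cost: each chunk waits
// for a full round trip before the next one is sent.
const chunkSize = 25 * 1024; // ~25 KiB per POST (observed)
const rtt = 0.3;             // ~300 ms between requests (observed)

function latencyCostSeconds(fileBytes) {
  const requests = Math.ceil(fileBytes / chunkSize);
  return requests * rtt; // latency cost alone, ignoring transfer time
}

// A 100 KiB file needs 4 requests -> ~1.2 s spent purely on round trips.
console.log(latencyCostSeconds(100 * 1024));
```

(The real request counts differ a bit because of the adaptive chunk sizing described further down, but the shape of the problem is the same.)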

Server Setup Information

  • Version of Rocket.Chat Server: 1.3.2
  • Operating System: Debian 9 container on Ubuntu 18.04 host
  • Deployment Method: manual installation using systemd
  • Number of Running Instances: 1
  • DB Replicaset Oplog: wat
  • NodeJS Version: 8.16.1
  • MongoDB Version: 4.2.0
  • Proxy: nginx, cloudflare
  • Firewalls involved: none

Any additional Information

Using the default GridFS file store; I also tried the FileSystem store, but it uploaded at a similar speed, although I didn’t look at the network log until after reverting that.

Ok, I found that Rocket.Chat uses UploadFS for uploads, and the uploader can be configured with a custom chunk size. It also has an adaptive mode, which scales chunks to target uploading at 80% of available bandwidth up to a certain chunk size limit, and that mode can be disabled.

And this is the only use of UploadFS.Uploader I’ve found in the Rocket.Chat codebase using GitHub search, and it doesn’t specify any options. Looking in the code, the default chunk size is 32 KiB and the default max chunk size for adaptive mode is ~4 MB (4 * 1024 * 1000 bytes). So I think I can just disable the adaptive mode, since it doesn’t seem to help at all, and set the chunk size to the highest upload limit of the gateways (i.e. 1 MB for nginx or 50 MB for Cloudflare). Then a smaller file will use just one request, while a larger one is split into chunks, which is actually useful in this case.
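As a sketch, the options I planned to set look like this (option names are from the jalik:ufs UploadFS.Uploader documentation; the actual Rocket.Chat call site passes other fields too, so treat this as the shape of the change, not a drop-in patch):

```javascript
// Planned uploader options (sketch): fixed 1 MB chunks, no adaptive sizing.
const uploaderOptions = {
  adaptive: false,        // disable adaptive chunk scaling
  chunkSize: 1024 * 1000, // ~1 MB, matching nginx's default body limit
};

// With a fixed ~1 MB chunk, a 100 KiB file fits in a single request:
const requestsFor100KiB = Math.ceil((100 * 1024) / uploaderOptions.chunkSize);
console.log(requestsFor100KiB); // 1
```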

I found it in the compiled app.js on the server, but changing it and restarting everything didn’t seem to change anything; I guess the client code is compiled elsewhere, because trying to grep -r for it makes the output explode with minified one-line code. So I need to figure out how to build Rocket.Chat to modify this properly, but that will have to wait for another day.

Hmm, I don’t know. I searched the build in VS Code and ./programs/server/app/app.js has the only occurrence of it; the only other occurrence is in the module itself (the file that exploded grep). I did a hard reload of the client to be sure, but it’s still sending small chunks. :confused:


The frontend code is minified at ./programs/web.browser/cc945290c4dafe28d2516121de8ec1f7c477a095.js, and there it’s r.Uploader; I’ll try adding the options there too.

Edit: whoops, the filename is different for each release (the latest release is slightly newer now), so just edit the corresponding .js file in ./programs/web.browser.
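For anyone else hunting for the right file: since the hash in the filename changes every release, you can search for the uploader symbol instead of hard-coding the name. Paths here assume a standard tarball install, run from the bundle root:

```shell
# List matching filenames only (-l), instead of dumping the minified
# one-line matches that make plain grep output explode.
grep -l "Uploader" programs/web.browser/*.js
```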

IT WORKED!! I edited the file in nano (editing a minified one-line file was really hard for nano) and hard-reloaded the window (no need to restart the server). Now files upload in 1 MB chunks as I configured! See GIF:
Uploads are faster now, although not as fast as I expected, because the server is slower to respond to larger chunks. I set it to 1 MB because I read that’s nginx’s default body size limit, but I’m waiting for the sysadmin to configure my vhost to allow 50 MB to match Cloudflare, and it should be better then.
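For reference, the nginx side of this is the client_max_body_size directive, which defaults to 1m; raising it in the vhost is what lets a whole 50 MB chunk through in one POST (a sketch of the relevant bit only, not a full vhost):

```nginx
server {
    # ... existing vhost config ...

    # Raise the request body limit (default 1m) to match
    # Cloudflare's 50 MB upload cap.
    client_max_body_size 50m;
}
```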

Ok, so I’ve set it to 50 MB now, but I found out that maxChunkSize is actually used even if adaptive = false, despite what the documentation said (I opened a PR). So I also had to add maxChunkSize: -1 (it’s ignored when less than 0; alternatively it could be set equal to or larger than chunkSize).
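So the final working options ended up looking like this (again a sketch with just the fields I changed; the surrounding call passes more):

```javascript
// Final working options: adaptive off AND maxChunkSize neutralized,
// since maxChunkSize is still applied even when adaptive is false.
const uploaderOptions = {
  adaptive: false,
  chunkSize: 50 * 1000 * 1000, // 50 MB, matching Cloudflare's limit
  maxChunkSize: -1,            // values below 0 disable the cap
};

// A 6.05 MB file now goes up in a single request:
const requests = Math.ceil(6.05e6 / uploaderOptions.chunkSize);
console.log(requests); // 1
```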

Now uploads under 50 MB go up in a single request, allowing maximum bandwidth and the fastest upload speed. The only remaining issue is that the progress meter shows 0% the entire time.

Nice work. I vote this should be an issue or feature request.