Slow download of big static files from nginx

Tags: linux, networking, nginx, debian, vmware-esxi
I’m using Debian 7 x64 under VMware ESXi virtualization.

Max download per client is 1mb/s, and nginx doesn’t use more than 50mbps in total. My question is: what may be causing such slow transfers?
```
Settings for eth1:
    Supported ports: [ TP ]
    Supported link modes:   1000baseT/Full
                            10000baseT/Full

root@www:~# iostat
Linux 3.2.0-4-amd64 (www)   09.02.2015      _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1,75    0,00    0,76    0,64    0,00   96,84

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda             173,93      1736,11       219,06     354600      44744

root@www:~# free -m
             total       used       free     shared    buffers     cached
Mem:         12048       1047      11000          0        106        442
-/+ buffers/cache:        498      11549
Swap:          713          0        713
```
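For reference, the negotiated link speed and duplex can be double-checked with something like the following; `eth1` is the interface from the output above:

```
# Confirm what speed and duplex the NIC actually negotiated
ethtool eth1 | grep -E 'Speed|Duplex'
```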
Current sysctl settings:

```
# Increase system IP port limits to allow for more connections
net.ipv4.ip_local_port_range = 2000 65000
net.ipv4.tcp_window_scaling = 1

# number of packets to keep in backlog before the kernel starts dropping them
net.ipv4.tcp_max_syn_backlog = 3240000

# increase socket listen backlog
net.core.somaxconn = 3240000
net.ipv4.tcp_max_tw_buckets = 1440000

# Increase TCP buffer sizes
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```
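Assuming these settings live in /etc/sysctl.conf, they can be applied and spot-checked like this:

```
# Reload kernel parameters from /etc/sysctl.conf
sysctl -p

# Verify that a value actually took effect
sysctl net.ipv4.tcp_rmem
```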
The debug log is completely empty; only when I manually cancel the download do I get the following entry:
```
2015/02/09 20:05:32 [info] 4452#0: *2786 client prematurely closed connection while sending response to client, client: 83.11.xxx.xxx, server: xxx.com, request: "GET filename HTTP/1.1", host: "xxx.com"
```
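As a side note, getting anything useful into the debug log requires an nginx binary built with `--with-debug` plus an `error_log` directive at debug level; a minimal sketch (the log path is illustrative):

```
# nginx.conf -- only produces debug output on a binary compiled with --with-debug
error_log /var/log/nginx/error.log debug;
```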
```
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1309M  100 1309M    0     0   374M      0  0:00:03  0:00:03 --:--:--  382M
```
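The progress meter above is curl's; a local test along these lines (the URL is a placeholder) is a handy way to confirm that the disk and filesystem can deliver the file far faster than remote clients ever see it:

```
# Fetch the file from the server itself and discard it, to measure local read throughput
curl -o /dev/null http://localhost/filename
```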
An answer for anyone who lands here through Google:
Sendfile is blocking and doesn’t let nginx take advantage of read-ahead, so it’s very inefficient if a file is only read once.

Sendfile relies on the filesystem cache and was never designed for such large files.

What you want is to disable sendfile for large files and use directio instead (preferably with threads, so it stays non-blocking).
Any file under 16 MB will still be read using sendfile:
```
aio threads;
directio 16M;
output_buffers 2 1M;

sendfile on;
sendfile_max_chunk 512k;
```
By using directio you read directly from the disk, skipping many steps on the way.
Please note that to use aio threads you need to compile nginx with threads support: https://www.nginx.com/blog/thread-pools-boost-performance-9x/
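A quick way to check whether an existing binary already has thread pool support (the `--with-threads` build flag) is:

```
# Prints --with-threads if the installed nginx was built with thread pool support
nginx -V 2>&1 | grep -o -- --with-threads
```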
You probably need to change the sendfile_max_chunk value, as the documentation states:
```
Syntax:  sendfile_max_chunk size;
Default: sendfile_max_chunk 0;
Context: http, server, location
```
When set to a non-zero value, limits the amount of data that can be transferred in a single sendfile() call. Without the limit, one fast connection may seize the worker process entirely.
You may also want to adjust buffer sizes in case most of your traffic is “big” static files.
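Putting the pieces together, a location block for large static downloads might look roughly like this; the /downloads/ path, the 16 MB directio cutoff and the buffer sizes are illustrative values to tune for your own traffic:

```
location /downloads/ {
    # Files of 16 MB and above bypass the page cache and are read with O_DIRECT
    directio 16M;
    # Offload the (otherwise blocking) reads to a thread pool
    aio threads;
    output_buffers 2 1M;

    # Files below the directio cutoff are still served via sendfile,
    # capped per call so one fast client cannot monopolise a worker
    sendfile on;
    sendfile_max_chunk 512k;
}
```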
Have you tried tuning the MTU (Maximum Transmission Unit), the size of the largest network-layer protocol data unit that can be communicated in a single network transaction? In our case, switching it from 1500 to 4000 bytes drastically improved download performance. The supported MTU differs depending on the IP transport, so try different values and see what size makes sense in your use case.
You can use ifconfig to check the existing MTU size and the following command to update it at runtime:

```
ifconfig eth0 mtu 5000
```
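To make such a change survive reboots on Debian with ifupdown, an `mtu` option in the interface stanza works for statically configured interfaces; the addresses below are placeholders and the stanza must match your existing /etc/network/interfaces:

```
# /etc/network/interfaces -- example static stanza with a larger MTU
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    mtu 4000
```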
Also see this very useful article covering the broader topic: How to transfer large amounts of data via network?