Slow download of big static files from nginx

This post walks through a question about slow downloads of large static files from nginx, together with the accepted solution. It touches on linux, networking, nginx, debian, and vmware-esxi.

I’m running Debian 7 x64 as a VMware ESXi guest.

Downloads max out at about 1mb/s per client, and nginx as a whole never uses more than 50mbps. What could be causing such slow transfers?


Settings for eth1 (ethtool output):
    Supported ports: [ TP ]
    Supported link modes:   1000baseT/Full
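To rule out a link-speed or duplex mismatch, it is worth confirming what the interface actually negotiated. A quick check with ethtool (the interface name matches the output above):

# show the negotiated speed and duplex for eth1
ethtool eth1 | grep -E 'Speed|Duplex'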

root@www:~# iostat
Linux 3.2.0-4-amd64 (www)       09.02.2015      _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
       1,75    0,00    0,76    0,64    0,00   96,84

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda             173,93      1736,11       219,06     354600      44744

root@www:~# free -m
             total       used       free     shared    buffers     cached
Mem:         12048       1047      11000          0        106        442
-/+ buffers/cache:        498      11549
Swap:          713          0        713


My nginx.conf:

user www-data;
worker_processes 4;
pid /var/run/;

events {
        worker_connections 3072;
        # multi_accept on;
}

http {

        # Basic Settings

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 5;
        types_hash_max_size 2048;
        server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # Logging Settings

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        # Gzip Settings

        gzip on;
        gzip_disable "msie6";

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        # nginx-naxsi config
        # Uncomment it if you installed nginx-naxsi

        #include /etc/nginx/naxsi_core.rules;

        ## Start: Size Limits & Buffer Overflows ##

        client_body_buffer_size 1k;
        client_header_buffer_size 1k;
        client_max_body_size 4M;
        large_client_header_buffers 2 1k;

        ## END: Size Limits & Buffer Overflows ##

        ## Start: Timeouts ##

        client_body_timeout   10;
        client_header_timeout 10;
        send_timeout          10;

        ## End: Timeouts ##

        # nginx-passenger config
        # Uncomment it if you installed nginx-passenger

        #passenger_root /usr;
        #passenger_ruby /usr/bin/ruby;

        # Virtual Host Configs

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}


# Increase system IP port limits to allow for more connections

net.ipv4.ip_local_port_range = 2000 65000

net.ipv4.tcp_window_scaling = 1

# number of half-open connections to queue before the kernel starts dropping them
net.ipv4.tcp_max_syn_backlog = 3240000

# increase socket listen backlog
net.core.somaxconn = 3240000
net.ipv4.tcp_max_tw_buckets = 1440000

# Increase TCP buffer sizes
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
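After editing /etc/sysctl.conf, the values can be applied at runtime in the usual way:

# reload the settings from /etc/sysctl.conf
sysctl -p

# spot-check a single value
sysctl net.core.rmem_max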


The debug log is completely empty; I only get the following message when I manually cancel a download:

2015/02/09 20:05:32 [info] 4452#0: *2786 client prematurely closed connection while sending response to client, client:, server:, request: "GET filename HTTP/1.1", host: ""

curl output:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1309M  100 1309M    0     0   374M      0  0:00:03  0:00:03 --:--:--  382M

Solution:

An answer for anyone who found this through Google:

sendfile() is blocking and does not let nginx set read-ahead, so it is very inefficient when a file is read only once.

sendfile relies on the filesystem cache and was never designed for files this large.

What you want is to disable sendfile for large files and use directio instead (preferably with threads, so it is non-blocking).
With the settings below, any file under 16MB will still be read using sendfile.

aio threads;               # offload blocking reads to a thread pool
directio 16M;              # read files of 16M and larger with O_DIRECT, bypassing the page cache
output_buffers 2 1M;       # buffers used for the direct reads

sendfile on;               # smaller files still go through sendfile()
sendfile_max_chunk 512k;   # cap the data sent in a single sendfile() call

By using directio you read directly from the disk, skipping many steps along the way.
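For completeness, here is a minimal sketch of how these directives can be scoped to just the large downloads, leaving the rest of the site on plain sendfile (the /downloads/ path is a hypothetical example):

# apply direct I/O only where the big static files live (hypothetical path)
location /downloads/ {
    sendfile on;
    sendfile_max_chunk 512k;
    aio threads;
    directio 16M;
    output_buffers 2 1M;
}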

Please note that to use aio threads, nginx must be compiled with thread support (the --with-threads configure option).
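A quick way to check whether your binary already has it (nginx -V prints the configure arguments to stderr):

# prints "with-threads" if the binary was built with thread support
nginx -V 2>&1 | grep -o with-threads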

You probably need to adjust the sendfile_max_chunk value, as the documentation states:

Syntax:   sendfile_max_chunk size;
Default:  sendfile_max_chunk 0;
Context:  http, server, location

When set to a non-zero value, limits the amount of data that can be transferred in a single sendfile() call. Without the limit, one fast connection may seize the worker process entirely.

You may also want to adjust the buffer sizes if most of your traffic is “big” static files.

Have you tried tuning the MTU (Maximum Transmission Unit), the size of the largest network-layer protocol data unit that can be communicated in a single transaction? In our case, raising it from 1500 to 4000 bytes drastically improved download performance. Supported MTU sizes differ depending on the underlying transport, so try different values and see what makes sense for your use case.

You can use ifconfig to check the current MTU, and the following command to update it at runtime:

ifconfig eth0 mtu 5000
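On newer systems the same change can be made with the ip tool; either way, verify the result afterwards (eth0 and the value are examples, and the change does not persist across reboots unless added to the network configuration):

# set the MTU at runtime and verify it
ip link set dev eth0 mtu 4000
ip link show eth0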

Also see this very useful article on the subject: “How to transfer large amounts of data via network?”
