The following is a quick guide to optimizing Nginx configuration; it should help if you expect high traffic loads on your website.

ATTENTION: at the time of writing this post, the latest stable version of Nginx is 1.8.0. The instructions provided here are for that version and may or may not apply to later versions, if you are reading this post in the far future.

General Section

  1. worker_processes auto; (default is: 1)
    The optimal number of worker processes is usually equal to the number of CPU cores. Setting a higher value does not normally improve performance unless your workers do a lot of disk I/O. Recent versions of Nginx can determine the optimal value automatically if you set the directive to ‘auto’, as shown above.
  2. worker_rlimit_nofile 10240;
    Changes the limit on the maximum number of open files (RLIMIT_NOFILE) for worker processes. Used to increase the limit without restarting the main process.
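
  These two directives live in the main (top-level) context of nginx.conf, outside the events and http blocks. Assuming the values above, that part of the file is simply:

     worker_processes auto;
     worker_rlimit_nofile 10240;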

Events Handling (under events section)

  1. use epoll;
    epoll is the efficient connection-processing method on Linux 2.6+. Nginx normally selects the best available method automatically, so setting it explicitly is usually unnecessary, but it makes the choice visible in your configuration.
  2. multi_accept on; (default: off)
    If multi_accept is disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections immediately.
  3. worker_connections 4096; (default is: 512)
    The maximum number of connections that each worker process can handle simultaneously. Increase this number if you need to. Testing required to determine optimal value.
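
  Putting the three directives above together, the events block looks like:

     events {
         use epoll;
         multi_accept on;
         worker_connections 4096;
     }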

Timeouts (all of these go under the ‘http’ section)

  1. keepalive_timeout 10; (default is: 75 seconds)
    This should be as close to your average response time as possible. Set it higher and you are wasting server resources, potentially affecting performance significantly; set it lower and you are not utilizing keep-alives on most of your requests, slowing down the client. We assume that on a fast system, most requests return in under ~5 seconds.
  2. keepalive_requests 1024; (default is: 100)
    Sets the maximum number of requests that can be served through one keep-alive connection; after that, the connection is closed.
  3. client_header_timeout 10; (default is: 60 seconds)
    Defines a timeout for reading client request header. If a client does not transmit the entire header within this time, the 408 (Request Time-out) error is returned to the client.
  4. client_body_timeout 10; (default is: 60 seconds)
    Defines a timeout for reading the client request body. The timeout applies only between two successive read operations, not to the transmission of the whole body.
  5. send_timeout 10; (default is: 60 seconds)
    Sets a timeout for transmitting a response to the client. The timeout is set only between two successive write operations, not for the transmission of the whole response. If the client does not receive anything within this time, the connection is closed.
  6. sendfile on; (default: off)
    Sendfile optimizes serving static files, such as images and videos. When the setting is ‘on’, Nginx uses the kernel’s sendfile support to copy data directly between file descriptors instead of going through user space.
  7. tcp_nopush on; (default: off)
    Enables the TCP_CORK socket option on Linux, letting Nginx send the response header and the beginning of a file in one packet. Takes effect only when sendfile is enabled.
  8. tcp_nodelay on; (default: on)
    Disables Nagle’s algorithm so small packets are sent immediately rather than buffered; Nginx applies it to connections in the keep-alive state.
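
  Taken together, the directives above sit directly in the http context; a sketch with the values from this section:

     http {
         keepalive_timeout 10;
         keepalive_requests 1024;
         client_header_timeout 10;
         client_body_timeout 10;
         send_timeout 10;
         sendfile on;
         tcp_nopush on;
         tcp_nodelay on;
         # ... the rest of your http configuration (gzip, servers, etc.)
     }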

GZipping Output

  1. gzip on;
  2. gzip_vary on;
  3. gzip_comp_level 2;
  4. gzip_buffers 4 8k;
  5. gzip_min_length 1024;
  6. gzip_proxied expired no-cache no-store private auth;
  7. gzip_types text/plain application/javascript application/x-javascript text/xml text/css application/json;
  8. gzip_disable "MSIE [1-6]\.";
    If, for some unknown reason, you still care about older IE versions. ¯\_(ツ)_/¯
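
These gzip directives normally go in the http context so they apply to every server, but they can also be overridden per server or per location. For example, a hypothetical location serving pre-compressed archives gains nothing from gzip:

     location /downloads {
         gzip off;  # archives are already compressed
     }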

Caching Static Resources

If you are fingerprinting your static resources (and you should be), you can get a very significant performance boost by letting clients cache those resources for a long time:

  1. This normally goes under your virtual host definition:

     location ~* \.(ttf|mp3|mp4|webm|ogg|jpg|jpeg|png|gif|ico|css|js)$ {
         expires 365d;
     }
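
A fuller sketch of that location block, with the Cache-Control header made explicit (the extension list is only an example; adjust it to match your assets):

     location ~* \.(ttf|mp3|mp4|webm|ogg|jpg|jpeg|png|gif|ico|css|js)$ {
         expires 365d;
         add_header Cache-Control "public";
         access_log off;  # optional: skip access logging for static assets
     }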

Simple Abuse-Protection

DDoS is a sophisticated class of attack with no universal defense, and a serious defense requires dedicated solutions; still, you can improve your chances by dropping naive attacks with a configuration such as:

 limit_conn_zone $binary_remote_addr zone=addr:10m;
 limit_conn addr 100;

The first directive allocates a 10 MB shared memory zone that keeps track of connection metadata (keys and values), and the second uses it to limit the number of concurrent connections from a single IP address to 100. This effectively limits the harm from attackers who run load generators from a small number of machines.
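
Note that limit_conn_zone is only valid in the http context, while limit_conn may appear in http, server, or location blocks. A sketch of how the two fit together (the /download/ location is just an example):

     http {
         limit_conn_zone $binary_remote_addr zone=addr:10m;

         server {
             location /download/ {
                 limit_conn addr 100;  # at most 100 concurrent connections per client IP
             }
         }
     }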

You can also add configurations such as:

# Request bodies larger than this buffer will be written to a temporary file.
client_body_buffer_size  64k;

# Headers that do not fit into this buffer are allocated from
# large_client_header_buffers; requests exceeding even those get an error.
client_header_buffer_size 64k;
large_client_header_buffers 4 64k;

Open File Limits in Linux

Nginx can use up to 2 file descriptors per connection (keep-alive connections help reduce this, though). If you are running out of available file descriptors, tweak the following values in /etc/security/limits.conf:

* soft nofile 10240
* hard nofile 20480

(The first field is the user the limit applies to; ‘*’ applies it to everyone. You may prefer to name the user your Nginx workers run as.)

Sysctl Configuration

The Nginx connection queue can be affected by directives in /etc/sysctl.conf, which sets Linux defaults for queue sizes, buckets, open-file limits, and timeouts. You can tweak these by updating the ‘net.core.somaxconn’, ‘net.ipv4.tcp_max_tw_buckets’, ‘fs.file-max’ and ‘net.ipv4.tcp_fin_timeout’ settings. If you see related error messages in the kernel log, increase these values until the errors stop. Sample values:

net.core.somaxconn = 65536
net.ipv4.tcp_max_tw_buckets = 1440000
fs.file-max = 20480
net.ipv4.tcp_fin_timeout = 15

net.ipv4.tcp_fin_timeout defaults to 60 seconds; shorter values free up temporary ports (used for backend connections) faster, which is important under high load. net.core.netdev_max_backlog controls how many packets are queued by the network card before being handed off to the CPU; increasing it can improve performance on machines with a lot of bandwidth. Check the kernel log for errors related to this setting, and consult the network card documentation for advice on changing it.

Once you change the file you will need to run: sudo sysctl -p.

Please note that an “open file” is not the same as a “file descriptor”, so the sysctl settings are not the same as the settings in limits.conf.

You can tune more settings (e.g. net.core.netdev_max_backlog) in sysctl.conf to improve TCP performance, if you need to. ESnet has a couple of in-depth articles on the subject.

Bonus: PHP/FPM Tuning

If you are using Nginx as a proxy for PHP-FPM, there are additional, helpful tuning instructions that you will probably be interested in.
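
As a starting point, a minimal PHP-FPM proxy location might look like the following (the socket path is distribution-specific and the parameters are a common baseline, not a definitive setup):

     location ~ \.php$ {
         include fastcgi_params;
         fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
         fastcgi_index index.php;
         fastcgi_pass unix:/var/run/php-fpm.sock;  # adjust to your PHP-FPM socket or TCP address
     }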