How To Optimize a Heavy-Traffic WordPress Site

I’m running Varnish as a front-end to Nginx, which is running WordPress loaded with the W3-Total-Cache plugin. The W3-Total-Cache plugin is configured to use memcached for caching and Amazon S3 as its CDN. All of this sits on Ubuntu Linux.

Nginx

The first thing I did was dump Apache. I love Apache, don’t get me wrong, but I prefer to go with simple and fast if I have the option, and that’s what Nginx offers.

Again, I’m using Ubuntu, so the installations here are pretty clean.

# install nginx

aptitude install nginx

You’re also going to need php-fpm so Nginx can hand PHP requests off to a pool of FastCGI workers (Nginx doesn’t execute PHP itself).

# install php5-fpm

aptitude install php5-fpm
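The Nginx config below expects php-fpm to be listening on 127.0.0.1:9000. That was the stock setting for Ubuntu’s php5-fpm package, but it’s worth confirming in the pool config (the path shown is the php5-fpm default; adjust for your PHP version):

# /etc/php5/fpm/pool.d/www.conf
listen = 127.0.0.1:9000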

Configuration is handled by /etc/nginx/nginx.conf, where you’ll want to just do a few things:

 

# Miscellaneous Options

# in the events { } block:
   multi_accept       on;

# in the http { } block:
   sendfile           on;
   tcp_nopush         off;
   keepalive_timeout  30;
   tcp_nodelay        on;
   gzip               on;
   gzip_proxied       any;
   gzip_comp_level    2;
   gzip_disable       "MSIE [1-6]\.(?!.*SV1)";
   gzip_types         text/plain text/css application/x-javascript text/xml
                      application/xml application/xml+rss text/javascript;

 

And then here’s the site config, under /etc/nginx/sites-enabled/default:

[ A couple of options have been omitted, but they should be self-explanatory, such as server name and the port you’re listening on. ]
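For reference, the omitted bits at the top of the server block would look something like this. It’s just a sketch: it assumes Nginx listens on port 8080 behind Varnish (as described in the Varnish section below) and uses the test hostname from the benchmarks later on.

    listen      8080;
    server_name test.shineservers.in;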

 

## Default location
    location / {
        root   /var/www/;
        index  index.php;
        try_files $uri/ $uri /index.php?q=$uri&$args;
        port_in_redirect off;
    }

    ## Static assets: skip logging and let browsers cache them
    location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|xml)$ {
        access_log off;
        expires    30d;
        root       /var/www/;
    }

    ## Hand PHP off to php-fpm
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_pass   backend;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  /var/www/$fastcgi_script_name;
        include        fastcgi_params;
        fastcgi_param  QUERY_STRING     $query_string;
        fastcgi_param  REQUEST_METHOD   $request_method;
        fastcgi_param  CONTENT_TYPE     $content_type;
        fastcgi_param  CONTENT_LENGTH   $content_length;
        fastcgi_intercept_errors        on;
        fastcgi_ignore_client_abort     off;
        fastcgi_connect_timeout         60;
        fastcgi_send_timeout            180;
        fastcgi_read_timeout            180;
        fastcgi_buffer_size             128k;
        fastcgi_buffers                 4 256k;
        fastcgi_busy_buffers_size       256k;
        fastcgi_temp_file_write_size    256k;
    }

    ## Deny access to dotfiles and VCS directories
    location ~ /\.ht {
        deny all;
    }
    location ~ /\.git {
        deny all;
    }
    location ~ /\.svn {
        deny all;
    }

## The php-fpm upstream (this goes outside the server { } block)
upstream backend {
    server 127.0.0.1:9000;
}
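After editing, it’s worth letting Nginx validate the syntax before reloading; these commands assume the stock Ubuntu package layout:

# check the config, then reload nginx
nginx -t
/etc/init.d/nginx reload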

 

[ I’ve collected all these options from multiple sources–mostly from the official docs when possible–and have tweaked them through experimentation. Hopefully this will save you lots of time getting up and running with a decent config. ]

Varnish

So, Varnish is a wicked-fast reverse proxy for serving up content. The idea is that if someone has just requested something from your backend (Nginx) and the cached copy hasn’t expired yet, Varnish can serve the next request from memory much faster than a full web server like Apache or even Nginx can.

Varnish is a simple install as well when using Ubuntu.

# install varnish

aptitude install varnish
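On Ubuntu the VCL below goes in /etc/varnish/default.vcl (it uses the Varnish 2.x/3.x syntax those packages shipped with), and the port Varnish listens on is set in /etc/default/varnish. Here’s a sketch of the daemon options, which are roughly the package defaults with the listen port changed to 80 and an example cache size:

# /etc/default/varnish
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -s malloc,256m"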

 

backend default {
    .host = "localhost";
    .port = "8080";
}

acl purge {
    "localhost";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return(lookup);
    }
    if (req.url ~ "^/$") {
        unset req.http.cookie;
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        error 404 "Not in cache.";
    }
    if (!(req.url ~ "wp-(login|admin)")) {
        unset req.http.cookie;
    }
    if (req.url ~ "^/[^?]+\.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.*|)$") {
        unset req.http.cookie;
        set req.url = regsub(req.url, "\?.*$", "");
    }
    if (req.url ~ "^/$") {
        unset req.http.cookie;
    }
}

sub vcl_fetch {
    if (req.url ~ "^/$") {
        unset beresp.http.set-cookie;
    }
    if (!(req.url ~ "wp-(login|admin)")) {
        unset beresp.http.set-cookie;
    }
}

 

So a key thing to realize about Varnish is that you need to get rid of cookies to see the benefit from it. If Varnish sees cookies flying back and forth, it’s going to assume there’s some sensitive functionality at play, and it’s not going to interfere. So part of this is saying that if you don’t see wp-login or wp-admin in the URL, strip the cookies.
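Since localhost is in the purge ACL defined at the top of the VCL, you can sanity-check the PURGE handling from the server itself with curl. The URL below is just a placeholder; the Host header should match the site being purged:

# should return 200 "Purged." if the page was cached, 404 "Not in cache." otherwise
curl -X PURGE -H "Host: test.shineservers.in" http://127.0.0.1/some-post/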

The key to the main config at the top is that Varnish sits on port 80 as the public face of your website, while your “real” web server (Nginx in our case) sits behind it on another port, e.g. 8080.

Ok, so if we spin both Nginx and Varnish up at this point we’ll have a website, and it’ll be decently fast. What we’ve done so far is:

  • In our Nginx config we applied gzip compression, enabled keepalive, and allowed it to handle multiple requests at once (plus a few other settings)
  • In Varnish, we’ve stripped cookies from most WordPress requests (anything that isn’t admin or login), and we’ve set up PURGE handling so that creating or updating a WordPress post refreshes the Varnish cache instead of leaving the site stale until it expires. We’ve also removed the cookies from the front page within Varnish.
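Once both are running, a quick way to confirm Varnish is the one answering on port 80 is to look at the response headers; a repeat request that hits the cache will show an Age header greater than zero alongside Varnish’s Via/X-Varnish headers (those header names are Varnish defaults, not anything we configured):

# the second request should come back as a cache hit
curl -sI http://localhost/ | egrep -i "^(age|via|x-varnish):"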

Now on to W3-Total-Cache.

W3-Total-Cache

In keeping with clean, tight configs and as little clutter as possible, I try to run as few WordPress plugins as I can. One that earns its place is W3-Total-Cache. It’s simply phenomenal at speeding up either Apache or Nginx.

It gets its speed gains by combining a few techniques: browser caching, memcached for page and object caching (which cuts down on database and PHP work), APC as a PHP opcode cache, and the use of a CDN. I use all of them.

First we’ll install memcached and APC on Ubuntu:

# install memcached and apc in Ubuntu

aptitude install memcached php5-memcache php-apc
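Before pointing W3-Total-Cache at it, make sure memcached actually came up and is listening on its default port (11211):

# memcached should show up listening on 11211
netstat -tlnp | grep 11211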

Then install W3-Total-Cache within WordPress and perform the following steps:

  1. Enable page caching (memcached)
  2. Enable the object cache (memcached)
  3. Enable the browser cache
  4. Do NOT enable the database cache (it’ll slow everything down)
  5. If your kung-fu is strong, enable the CDN functionality (you should be serving as much of your site from a CDN anyway–regardless of this plugin. I use Amazon S3)

[ I am actually not using W3 Total Cache now, as I’ve transferred its functionality to my nginx config and removed the plugin. Remember the principle of simplicity. ]

Benchmarks

Ok, so we now have Varnish acting as the front-end cache to Nginx, with tons of optimization happening at all layers. So the question is: How much did we improve things?

I use a variety of tools to test web performance, but the two I’ll discuss here are Apache Bench (ab) and WhichLoadsFaster.

[ Don’t forget to restart everything before continuing. Here’s the alias I use: ]

# bounce all the web-related services

alias whup="service mysql restart; service nginx restart; service php5-fpm restart; service varnish restart"

Apache Bench (ab)

ab is a tool for testing the performance of web servers. It sends mad requests to sites in the form of many concurrent connections. You can install ab on Ubuntu really easily by installing the Apache Utilities:

aptitude install apache2-utils

You can then run ab against your site like so:

ab -kc 10 -n 1000 http://test.shineservers.in/

ab allows you to define a port as well, which gives you the option of testing Varnish + Nginx vs. Nginx directly.

ab -kc 10 -n 1000 http://test.shineservers.in:8080/

The second example here is a test of Nginx without Varnish. Keep in mind that you’ll have to open up your firewall to connect to a port other than 80. You do run iptables on your web server, right?
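If you’re testing from another machine, a temporary iptables rule scoped to your own address does the trick. YOUR_TEST_IP is a placeholder; remember to remove the rule when you’re done:

# temporarily allow direct access to Nginx on 8080 from your test machine
iptables -I INPUT -p tcp -s YOUR_TEST_IP --dport 8080 -j ACCEPT

# remove it afterwards
iptables -D INPUT -p tcp -s YOUR_TEST_IP --dport 8080 -j ACCEPT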

Anyway, here is what I get when hitting a default Apache-based WordPress install (still on Ubuntu and Linode, though) using 10 concurrent connections for 1000 hits (with keepalive):

 

Server Software:        Apache/2.2.16
Server Hostname:        somesite.com
Server Port:            80
Document Path:          /?p=5
Document Length:        8577 bytes
Concurrency Level:      10
Time taken for tests:   57.812 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    0
Total transferred:      8833000 bytes
HTML transferred:       8577000 bytes
Requests per second:    17.30 [#/sec] (mean)
Time per request:       578.119 [ms] (mean)
Time per request:       57.812 [ms] (mean, across all concurrent requests)
Transfer rate:          149.21 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       59   60   0.9     60      88
Processing:   297  517  91.1    521    1055
Waiting:      236  456  90.9    461     995
Total:        357  576  91.2    581    1114
Percentage of the requests served within a certain time (ms)
  50%    581
  66%    616
  75%    637
  80%    651
  90%    689
  95%    722
  98%    753
  99%    777
 100%   1114 (longest request)

 

This isn’t horrible, with 95% of requests coming in at around 722ms, but that’s a default site with virtually no content in it, e.g. no images, plugins, ads, etc.

Now compare that with my pure Nginx performance (without Varnish) with all those things slowing me down:

 

Server Software:        nginx/0.7.65
Server Hostname:        test.shineservers.in
Server Port:            81
Document Path:          /
Document Length:        0 bytes
Concurrency Level:      10
Time taken for tests:   17.816 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Non-2xx responses:      1000
Keep-Alive requests:    0
Total transferred:      279000 bytes
HTML transferred:       0 bytes
Requests per second:    56.13 [#/sec] (mean)
Time per request:       178.162 [ms] (mean)
Time per request:       17.816 [ms] (mean, across all concurrent requests)
Transfer rate:          15.29 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       59   60   0.5     60      75
Processing:    97  109  12.3    104     172
Waiting:       97  109  12.3    104     172
Total:        157  168  12.3    163     232
Percentage of the requests served within a certain time (ms)
  50%    163
  66%    166
  75%    170
  80%    173
  90%    184
  95%    196
  98%    210
  99%    218
 100%    232 (longest request)

 

That’s stupid fast.

Now let’s try with Varnish added in (hitting port 80 instead of 8080):

 

Server Software:        nginx/0.7.65
Server Hostname:        test.shineservers.info
Server Port:            80
Document Path:          /
Document Length:        39435 bytes
Concurrency Level:      10
Time taken for tests:   7.530 seconds
Complete requests:      1000
Failed requests:        992
   (Connect: 0, Receive: 0, Length: 992, Exceptions: 0)
Write errors:           0
Keep-Alive requests:    1000
Total transferred:      39806014 bytes
HTML transferred:       39434008 bytes
Requests per second:    132.80 [#/sec] (mean)
Time per request:       75.301 [ms] (mean)
Time per request:       7.530 [ms] (mean, across all concurrent requests)
Transfer rate:          5162.33 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   5.9      0      60
Processing:    60   74  23.2     61     332
Waiting:       60   70  12.7     61     151
Total:         60   75  28.0     61     391
Percentage of the requests served within a certain time (ms)
  50%     61
  66%     80
  75%     88
  80%     89
  90%     89
  95%     90
  98%    120
  99%    298
 100%    391 (longest request)

 

Duh-am. That’s 95% of the requests finishing in 90ms or less! Happiness.

WhichLoadsFaster

Just for giggles I like to use a site called whichloadsfaster.com, which loads two sites side by side and graphically compares their load speed. Here’s what I get comparing my Varnish+Nginx setup against pure Nginx over 100 pulls (the two URLs differ only by the port number):

So, almost a 2.5x improvement in speed using Varnish + Nginx vs. just Nginx for loading my WordPress front page. And for my non-database PHP content I’m around 25% faster (tests of /study not shown for brevity).
