Turns out our old post about this is one of our most popular posts, who would have thought it! It was written by my predecessor Lee Parsons back in 2013, so I've been asked to write an updated version for 2018.
Lee concluded that NGINX was 4.2 times faster than Apache overall. I expect the difference to be less stark this time, as Apache has had to make gains in the intervening years to stay relevant. Apache is still used by 46% of websites overall, compared with 39% for NGINX; however, of the top 10,000 websites, 64% use NGINX and only 21% use Apache, so NGINX must be doing something right!
This time I won't be using a site that requires a lot of processing, just a default Laravel 5.6 site; I think this will give a fairer comparison of the web servers. I will also be using a remote server for the tests rather than a local one, and the sites will be served over SSL, as all sites should be these days.
So I created a new virtual server on our UKFast eCloud Hybrid server with the following settings:
- Ubuntu 16.04 x86_64
- 2 vCPUs
- 2GB RAM
- 10GB Hard disk
This should simulate a fairly standard webserver. A little more powerful than you’d get on free tier AWS but nothing particularly special.
I then installed Apache, NGINX and PHP 7.2 using `apt install`, and ran `apt upgrade` to bring everything else up to the latest versions. Both web servers were left in their default production configurations. The versions installed are as follows:
- Apache – 2.4.18
- NGINX – 1.10.3
- PHP-FPM – 7.2.5
MySQL was not installed as it won’t be required for these tests.
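For reference, the installation steps above would look roughly like this. This is a sketch rather than a transcript: the package names are standard Ubuntu ones, and since Ubuntu 16.04 ships PHP 7.0 by default, I'm assuming the `ondrej/php` PPA (or similar) was used to get PHP 7.2.

```shell
# Bring the base system up to date
sudo apt update && sudo apt upgrade -y

# PHP 7.2 isn't in Xenial's default repos, so add a PPA for it
# (assumption on my part; adjust if your repos already carry 7.2)
sudo add-apt-repository -y ppa:ondrej/php
sudo apt update

# Install both web servers and PHP-FPM
sudo apt install -y apache2 nginx php7.2-fpm
```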
The SSL certificate was generated using Let's Encrypt and Certbot. The virtual hosts were configured using the Mozilla SSL Configuration Generator (modern profile) and achieved an A+ rating in the Qualys SSL Labs test.
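As a minimal sketch of what that looks like on the NGINX side, assuming the certificate paths Certbot uses by default and a hypothetical domain (check the Mozilla generator for the current recommended cipher settings, which I've abbreviated here):

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;  # hypothetical domain
    root /var/www/laravel/public;
    index index.php;

    # Paths as issued by Certbot for this domain
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern profile restricts the protocols; cipher list omitted for brevity
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    }
}
```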
PHP-FPM has been left in its default production configuration. The webroot is the same for both servers, and during each test the other server will be stopped, so they will both listen on the same ports.
I then cloned my default Laravel install into the webroot. No files were changed from the defaults other than generating the APP_KEY environment variable, so PHP will simply be compiling the index template and sending the HTML to the web server; both the cache and session drivers in .env are set to 'file'. Before the tests begin, the compiled template will already have been cached by the Blade template system (by visiting the URL in my browser), to try to eliminate PHP's speed from the tests and give each web server as level a playing field as possible.
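I cloned my own default install, but a fresh Laravel 5.6 skeleton gets you the same starting point; something like the following (composer is assumed to be installed already):

```shell
cd /var/www
# Fresh Laravel 5.6 skeleton (I used my own default install instead)
composer create-project laravel/laravel laravel "5.6.*"
cd laravel

# CACHE_DRIVER and SESSION_DRIVER default to 'file' in .env
php artisan key:generate   # sets APP_KEY in .env

# Let the web server's user write to the cache/session directories
sudo chown -R www-data:www-data storage bootstrap/cache
```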
Enough about the server, on to the tests.
I've used Apache Bench for testing, as that's what was used in the previous post, and included the same metrics as before, although I've removed the failed requests metric as none of the requests failed. The tests were performed with commands like `ab -n 500 -c 10 <url>`.
The y-axis labels (e.g. 500/10) refer to the number of requests and the concurrency: 500/10 means five hundred requests were made for the URL, with ten being made simultaneously. I've increased both the number of requests and the concurrency, as I think the values in the old post were pretty low by today's standards.
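A run like this can be scripted; a sketch of how I'd loop over the requests/concurrency pairs and keep the output for later comparison (the URL is a placeholder, and the pairs shown are examples rather than the complete list I tested):

```shell
#!/bin/sh
# Run Apache Bench at each requests/concurrency level and
# save each run's output to its own file
URL="https://example.com/"   # placeholder for the test site
for pair in "500 10" "500 100" "1000 200"; do
    set -- $pair             # $1 = requests, $2 = concurrency
    ab -n "$1" -c "$2" "$URL" > "ab-$1-$2.txt"
done
```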
Connection time

Connection time is the average number of milliseconds (ms) it takes for the server to start sending data to the client after the request is made. A lower value is better, as it means the client's browser can begin rendering the page more quickly. As you can see from the chart, NGINX performs better until we get to 1000/200, where Apache is on average 114 ms quicker to respond.
Requests per second
Requests per second is the number of requests completed per second. A higher value is better here, as it means more clients can be served more quickly at the same time. As you can see, NGINX is marginally better, with about 4% more requests per second.
Time per request
Time per request is the average number of milliseconds (ms) that each request took to connect, be processed and have its response received by the client. A lower value is better here, as it means the client waits less time for their request to complete. NGINX is the champion of this metric by a small margin. However, the time per request becomes unacceptably high above 500/100 for both web servers; this is where some kind of load balancing would be required.
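For completeness, the simplest form of load balancing with NGINX is a round-robin upstream block; a minimal sketch, with hypothetical backend addresses:

```nginx
# Distribute requests across two identical app servers (round-robin)
upstream laravel_backend {
    server 10.0.0.11;   # hypothetical backend
    server 10.0.0.12;   # hypothetical backend
}

server {
    listen 443 ssl;
    server_name example.com;  # hypothetical domain
    location / {
        proxy_pass http://laravel_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```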
Transfer rate

Transfer rate is the speed at which data was sent from the server to the client; a higher value is better. Again NGINX wins, although only just in most cases, though it wins by a fair margin at 500/100.
In conclusion, NGINX is still the better choice for performance in most situations.
Personally, I also find it a lot easier to configure virtual hosts with NGINX, due to its more readable configuration files. Apache does have the advantage of its .htaccess distributed configuration, but I rarely find it necessary to change a virtual host's configuration once it's working well, and .htaccess support probably contributes a little to the reduced performance, since Apache has to check for these files in each directory. Another advantage of NGINX is that it can easily be used as a reverse proxy to serve NodeJS-powered sites, although this can be accomplished with Apache as well using the proxy_http module.
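As a sketch of that reverse-proxy setup, assuming a Node app listening on port 3000 and a hypothetical domain:

```nginx
# Proxy all traffic on this vhost to a local NodeJS process
server {
    listen 80;
    server_name node.example.com;  # hypothetical domain
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # WebSocket support
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```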
Neither server was specifically configured for high concurrency, and the PHP-FPM pools were not adjusted from their defaults, so your mileage may vary with either web server.
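If you do need to tune PHP-FPM for higher concurrency, the relevant knobs live in the pool configuration (on this setup, /etc/php/7.2/fpm/pool.d/www.conf). The values below are purely illustrative, not recommendations; the right numbers depend on your RAM and per-process memory use:

```ini
; www.conf process manager settings (the defaults are much lower)
pm = dynamic
pm.max_children = 50      ; hard cap on worker processes
pm.start_servers = 10     ; workers created on startup
pm.min_spare_servers = 5
pm.max_spare_servers = 15
```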
I did try running some very high concurrency tests (ab -n 5000 -c 500 <url>), but they produced too many SSL handshake failures on both servers. This might be something to be aware of if you're expecting very high concurrency on your site.