The Nginx blog (which, if you work with Nginx in any capacity, you should subscribe to) has an excellent guide to rate limiting. The article explains rate limiting from the basics, through bursts, all the way to more advanced examples with multiple rate limits for the same location.
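For a taste of what the guide builds up to, here is a minimal sketch of rate limiting with a burst allowance (the zone name, rate, and location are made-up values; adjust to your traffic):

```nginx
# Track clients by IP; allow 10 requests per second on average,
# keeping up to 10 MB of state for the counters
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

server {
    location /search/ {
        # Absorb short bursts of up to 20 extra requests without delaying
        # them; anything beyond that is rejected
        limit_req zone=per_ip burst=20 nodelay;
    }
}
```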
My brother wrote a follow-up – HAProxy abuse filtering and rate limiting – to his previous post – Nginx rate limit by user agent (control bots). This is just the tip of the iceberg of what we are working with at the office, but it’s pretty cool.
Hopefully, soon enough our Ansible playbooks will be up to date and shareable…
Somehow I missed the announcement of Nginx Amplify (beta) back in November of last year, so here it is now.
Nginx Amplify is a new tool for the comprehensive monitoring of Nginx web servers. Here’s what it can do for you:
- Visually identify performance bottlenecks, overloaded servers, or potential DDoS attacks
- Improve and optimize NGINX performance with intelligent advice and recommendations
- Get alerts when something is wrong with the delivery of your application
- Plan capacity and performance for web applications
- Keep track of systems running NGINX
All that, as well as regular, proactive monitoring of Nginx issues. Have a look at the documentation for more details.
The “504 Gateway Timeout” error is a very common issue when using Nginx with PHP-FPM. Usually, it means that PHP-FPM took longer to generate the response than Nginx was willing to wait. A few possible reasons for this are:
- Nginx timeout configuration uses very small values (expecting the responses to be unrealistically fast).
- The web server is overloaded and takes longer than it should to process requests.
- The PHP application is slow (maybe due to the database behind it being slow).
There is plenty of advice online on how to troubleshoot and sort out these issues. But when it comes down to increasing the timeouts, I found such advice to be scattered, incomplete, and often outdated. This page, however, has a good collection of tweaks. They are:
- Increase PHP maximum execution time in /etc/php.ini: max_execution_time = 300
- Increase PHP-FPM request terminate timeout in the pool configuration (/etc/php-fpm.d/www.conf): request_terminate_timeout = 300
- Increase Nginx FastCGI read timeout (in /etc/nginx/nginx.conf): fastcgi_read_timeout 300;
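Put together, the Nginx side of those tweaks might look something like this (the PHP-FPM socket path is an assumption; adjust it to your pool configuration):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php-fpm/www.sock;  # assumed socket path
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Wait up to 5 minutes for PHP-FPM to produce a response
    fastcgi_read_timeout 300;
}
```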
Also, see this Stack Overflow thread for more suggestions.
P.S.: while you are sorting out your HTTP errors, have a quick look at HTTP Status Dogs, which I blogged about a while back.
The last few weeks were super busy at work, so I accidentally let a few SSL certificates expire. Renewing them is always annoying and time-consuming, so I kept pushing it off until the last minute, and then some.
Instead of going the usual way for the renewal, I decided to try the Let’s Encrypt way. (I’ve covered Let’s Encrypt before here and here.) Basically, Let’s Encrypt is a new Certificate Authority, created by the Electronic Frontier Foundation (EFF), with the backing of Google, Cisco, the Mozilla Foundation, and the like. This new CA issues well-recognized SSL certificates, for free. Which is good. But the best part is that they’ve set up the process to be as automated as possible. All you need is to run a shell command to get the certificate, and then another shell command in the crontab to renew the certificate automatically. Certificates are only issued for 3 months, so you’d really want to have them renewed automatically.
It took me longer than I expected to figure out how this whole thing works, but that’s because I’m not well versed in SSL, and because they have so many different options, suited for different web servers, and different sysadmin experience levels.
Eventually I made it work, and here is the complete process, so that I don’t have to figure it out again later.
We are running a mix of CentOS 7 and Amazon AMI servers, using both Nginx and Apache. Here’s what I had to do.
First things first. Install the Let’s Encrypt client software. Supposedly there are several options, but I went for the official one. Manual way:
```bash
# Install requirements
yum install git bc

cd /opt
git clone https://github.com/certbot/certbot letsencrypt
```
Alternatively, you can use geerlingguy’s lets-encrypt-role for Ansible.
Secondly, we need to get a new certificate. As I said before, there are multiple options here. I decided to use the certonly way, so that I have better control over where things go, and so that I would minimize the web server downtime.
There are a few things that you need to specify for the new SSL certificate. These are:
- The list of domains that the certificate should cover. I’ll use example.com and www.example.com here.
- The path to the web folder of the site. I’ll use /var/www/vhosts/example.com/ here.
- The email address that Let’s Encrypt will use to contact you in case there is something urgent. I’ll use email@example.com here.
Now, the command to get the SSL certificate is:
```bash
/opt/letsencrypt/certbot-auto certonly --webroot --email email@example.com \
    --agree-tos -w /var/www/vhosts/example.com/ \
    -d example.com -d www.example.com
```
When you run this for the first time, you’ll see a bunch of additional RPM packages being installed, for the virtual environment to be created and used. On CentOS 7 this is sufficient. On Amazon AMI, the command will run, install things, and then fail with something like this:
```
WARNING: Amazon Linux support is very experimental at present...
if you would like to work on improving it, please ensure you have backups
and then run this script again with the --debug flag!
```
This is useful, but insufficient. Before the command can run successfully, you’ll also need to do the following:
```bash
yum install python26-virtualenv
```
Once that is done, run the certbot command with the --debug parameter, like so:
```bash
/opt/letsencrypt/certbot-auto certonly --webroot --email email@example.com \
    --agree-tos -w /var/www/vhosts/example.com/ \
    -d example.com -d www.example.com --debug
```
This should produce a success message, with “Congratulations!” and all that. The path to your certificate (somewhere in /etc/letsencrypt/live/example.com/) and its expiration date will be mentioned too.
If you didn’t get the success message, make sure that:
- the domain for which you are requesting the certificate resolves to the server where you are running the certbot command. Let’s Encrypt will try to access the site for verification purposes.
- public access is allowed to the /.well-known/ folder. This is where Let’s Encrypt will store temporary verification files. Note that the folder name starts with a dot, which in UNIX means a hidden folder, and many web server configurations deny access to those.
Just drop a simple hello.txt into the /.well-known/ folder and see if you can access it with a browser. If you can, then Let’s Encrypt shouldn’t have any issues getting you a certificate. If all else fails, RTFM.
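If your configuration does deny hidden folders, a sketch of an Nginx exception for the verification path might look like this (assuming a typical deny-all rule for dotfiles):

```nginx
# Allow Let's Encrypt verification files, even if other hidden
# paths are denied elsewhere in the configuration
location ^~ /.well-known/acme-challenge/ {
    allow all;
    default_type text/plain;
}

# A common deny-all rule for the remaining hidden paths
location ~ /\. {
    deny all;
}
```

The `^~` prefix match takes precedence over the regular-expression location, so the verification folder stays reachable.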
Now that you have the certificate generated, you’ll need to add it to the web server’s virtual host configuration. How exactly to do this varies from web server to web server, and even between the different versions of the same web server.
For Apache version >= 2.4.8 you’ll need to do the following:
```apache
SSLEngine on
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
```
For Apache version < 2.4.8 you’ll need to do the following:
```apache
SSLEngine on
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
SSLCertificateChainFile /etc/letsencrypt/live/example.com/chain.pem
```
For Nginx >= 1.3.7 you’ll need to do the following:
```nginx
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
```
You’ll obviously need additional SSL configuration options for protocols, ciphers, and the like, which I won’t go into here, but here are a few useful links:
- Apache Configuration Example
- How To Secure Nginx with Let’s Encrypt on Ubuntu 14.04
- Certbot User Guide
- Using the webroot domain verification method
- Guide to Deploying Diffie-Hellman for TLS
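As a rough idea only – take the actual protocol and cipher values from the guides above, since recommendations change over time – such options look like this in Nginx:

```nginx
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
# Placeholder cipher list; substitute a currently recommended set
ssl_ciphers HIGH:!aNULL:!MD5;
# Custom Diffie-Hellman parameters, generated with openssl dhparam
ssl_dhparam /etc/nginx/dhparams.pem;
```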
Once your SSL certificate is issued and the web server is configured to use it, all you need is to add a crontab entry to renew the certificates that are expiring in 30 days or less. You’ll only need a single entry for all the certificates on the machine. Edit your /etc/crontab file and add the following (adjusting for your web server software, obviously):
```bash
# Renew Let's Encrypt certificates at 6pm every Sunday
0 18 * * 0 root (/opt/letsencrypt/certbot-auto renew && service httpd restart)
```
That’s about it. Once all is up and running, verify and adjust your SSL configuration using Qualys SSL Labs’ excellent tool.
Cipherli.st – provides ready-to-use cipher configurations for a variety of applications, such as Apache, Nginx, Lighttpd, HAProxy, Exim, Postfix, Dovecot, OpenSSH, and others. This is a huge time-saver for those of us not well versed in cryptography and security.
In a recent project I crashed into a wall. At least for a couple of days, that is. The requirement was to integrate a Request Tracker (aka RT) installation, on a CentOS 7 server with Nginx, with a client company’s single sign-on solution. Which wasn’t LDAP. Or Active Directory. Or anything standard at all – a completely homegrown system.
Inside NGINX: How We Designed for Performance & Scale
SSO with Nginx auth_request module – SSO as in Single Sign-On. An absolutely beautiful solution for one set of requirements, and a horrendous one for another. Worth knowing about though.
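For context, the module’s basic pattern is roughly this (the backend address and the /auth endpoint are hypothetical):

```nginx
location /private/ {
    # Nginx issues a subrequest to /auth before serving the content;
    # a 2xx response allows the request, 401/403 denies it
    auth_request /auth;
}

location = /auth {
    internal;
    # Hypothetical SSO service that validates the session cookie
    proxy_pass http://127.0.0.1:9000/validate;
    # The auth subrequest should carry no request body
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```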
Here is an idea to try on a slow weekend: Nginx and Memcached, a 400% boost!
Memcached, the darling of every web-developer, is capable of turning almost any application into a speed-demon. Benchmarking one of my own Rails applications resulted in ~850 req/s on commodity, non-optimized hardware – more than enough in the case of this application. However, what if we took Mongrel out of the equation? Nginx, by default, comes prepackaged with the Memcached module, which allows us to bypass the Mongrel servers and talk to Memcached directly. Same hardware, and a quick test later: ~3,550 req/s, or almost a 400% improvement! Not bad for a five minute tweak!
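The setup described above boils down to something like the following (the addresses and the key scheme are assumptions; the original article uses Mongrel as the backend):

```nginx
location / {
    # Look the page up in Memcached by its URI first
    set $memcached_key "$uri";
    memcached_pass 127.0.0.1:11211;
    default_type text/html;
    # On a cache miss (or Memcached being down), fall back to the app
    error_page 404 502 504 = @app;
}

location @app {
    proxy_pass http://127.0.0.1:8000;  # hypothetical Mongrel upstream
}
```

The application remains responsible for populating Memcached with rendered pages under the same keys; Nginx only reads from the cache.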