HAProxy SNI

“HAProxy SNI” is pure gold! If you want a load balancer for HTTPS traffic without managing SSL certificates on said load balancer, there is a way to do so.

The approach utilizes the Server Name Indication (SNI) extension to the TLS protocol.  I knew about it and was already using it on the web server side, but it didn’t occur to me that it could be used on the load balancer.  Here’s the configuration bit:

frontend https
  bind *:443
  description Incoming traffic to port 443
  mode tcp
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }
  use_backend backend-ssl-foobar if { req_ssl_sni -i foobar.com }
  use_backend backend-ssl-example if { req_ssl_sni -i example.com }
  default_backend backend-ssl-default

The above will make HAProxy listen on port 443 and then send all traffic for foobar.com to one backend, all traffic for example.com to another backend, and the rest to a third, default backend.
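For completeness, here is a minimal sketch of what the matching backend sections might look like; the server names and addresses are made up.  Since HAProxy never terminates TLS in this setup, the backends stay in mode tcp and the certificates live on the web servers themselves:

backend backend-ssl-foobar
  description Servers for foobar.com
  mode tcp
  server foobar1 192.168.0.11:443 check

backend backend-ssl-example
  description Servers for example.com
  mode tcp
  server example1 192.168.0.21:443 check

backend backend-ssl-default
  description Catch-all for everything else
  mode tcp
  server default1 192.168.0.31:443 check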

Why Configuration Management and Provisioning are Different

In “Why Configuration Management and Provisioning are Different” Carlos Nuñez advocates for the use of specialized infrastructure provisioning tools, like Terraform, Heat, and CloudFormation, instead of relying on configuration management tools like Ansible or Puppet.

I agree with his argument about rollbacks, but not so much about maintaining state and complexity.  However, I’m not yet comfortable putting my disagreement into words – my head is all over the place with clouds, and I’m still weak on the terminology.

The article is nice regardless, and made me look at the provisioning tools once again.

How to monitor your Linux servers with nmon

The “How to monitor your Linux servers with nmon” article provides some details on how to use the comprehensive server monitoring tool “nmon” (Nigel’s Monitor) to keep an eye on your server or two.  If you have more than a handful of servers, you’d probably opt for a full-blown monitoring solution like Zabbix, but even then, nmon can be useful for quick troubleshooting, screenshots, and data collection.

I’ve heard of nmon before and even used it occasionally.  What I didn’t know was that it can collect system metrics into a file, which can then later be analyzed and graphed with the nmonchart tool.

That’s pretty handy.  The extra bonus is that these tools are available in most Linux distributions, so there is no need to download/compile/configure things.
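For example, a rough sketch of that workflow (the file name nmon produces and the exact flags can vary between versions) might look like this:

# collect a snapshot every 30 seconds, 120 times, into hostname_DATE_TIME.nmon
nmon -f -s 30 -c 120

# later, turn the collected file into a self-contained HTML report
nmonchart myserver_170305_1200.nmon /tmp/nmon-report.html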


Nginx Amplify : comprehensive Nginx monitoring

Somehow I missed the announcement of Nginx Amplify (beta) back in November of last year, so here it is now.

Nginx Amplify is a new tool for the comprehensive monitoring of Nginx web servers.  Here’s what it can do for you:

  • Visually identify performance bottlenecks, overloaded servers, or potential DDoS attacks
  • Improve and optimize NGINX performance with intelligent advice and recommendations
  • Get alerts when something is wrong with the delivery of your application
  • Plan capacity and performance for web applications
  • Keep track of systems running NGINX

That’s in addition to the regular proactive monitoring of Nginx issues.  Have a look at the documentation for more details.

GitHub pricing : Business

GitHub has yet another update to their pricing options.  Business plans have been launched with support for SAML single sign-on, 99.95% uptime SLA, 24×5 support with 8 hour response, and more.

Unfortunately it still counts external contributors as users in the account, which makes it too expensive for my organizations, but it’s good to see them trying.

Fixing outdated Let’s Encrypt (zope.interface error)

I started using Let’s Encrypt for SSL certificates a while back.  I installed it on all the web servers, regardless of the immediate need for SSL, just to have it there when I need it (thanks to this Ansible role).  One of those old web servers needed an SSL certificate recently, so I thought it’d be no problem to generate one.

But I was wrong.  The letsencrypt-auto tool had become outdated and was failing to execute, throwing a Python exception about a missing zope.interface module.  A quick Google search brought up this StackOverflow discussion describing the exact issue I was having.

Traceback (most recent call last):
  File "/root/.local/share/letsencrypt/bin/letsencrypt", line 7, in <module>
    from certbot.main import main
  File "/root/.local/share/letsencrypt/local/lib/python2.7/dist-packages/certbot/main.py", line 12, in <module>
    import zope.component
  File "/root/.local/share/letsencrypt/local/lib/python2.7/dist-packages/zope/component/__init__.py", line 16, in <module>
    from zope.interface import Interface
ImportError: No module named interface

However, the solution didn’t fix the problem for me:

unset PYTHON_INSTALL_LAYOUT
/opt/letsencrypt/letsencrypt-auto -v

Even pulling the updated version from the GitHub repository didn’t solve it.

After poking around for a while longer, I found this bug report from last year, which solved my problem.

I recommend:

  1. Run rm -rf /root/.local/share/letsencrypt. This removes your installation of letsencrypt, but keeps all configuration files, certificates, logs, etc.
  2. Make sure you have an up to date copy of letsencrypt-auto. It can be found here.
  3. Run letsencrypt-auto again.
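In shell terms, the whole sequence looks roughly like this (assuming, as above, that the letsencrypt-auto checkout lives in /opt/letsencrypt):

# remove the broken local installation; configs and certificates are kept
rm -rf /root/.local/share/letsencrypt
# refresh letsencrypt-auto from the GitHub repository
cd /opt/letsencrypt && git pull
# run it again, letting it re-bootstrap its Python environment
./letsencrypt-auto -v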

If you get the same behavior, you can try installing zope.interface manually by running:

/root/.local/share/letsencrypt/bin/pip install zope.interface

Hopefully, next time I’ll remember to search my blog’s archives …

Downdetector – a weatherman for the digital world

Downdetector is yet another one of those services that monitors major web services and lets you see if any of them are experiencing issues or outages.

You can search for specific providers or browse by company or issue type.  There’s also a weekly top 10.  What I like in particular are the comments on each report, where you can get some feedback from other users experiencing the problem.


Google and HTTPS

Here is some interesting news on the subject of Google and HTTPS:

In support of our work to implement HTTPS across all of our products (https://www.google.com/transparencyreport/https/) we have been operating our own subordinate Certificate Authority (GIAG2), issued by a third-party. This has been a key element enabling us to more rapidly handle the SSL/TLS certificate needs of Google products.

As we look forward to the evolution of both the web and our own products it is clear HTTPS will continue to be a foundational technology. This is why we have made the decision to expand our current Certificate Authority efforts to include the operation of our own Root Certificate Authority. To this end, we have established Google Trust Services (https://pki.goog/), the entity we will rely on to operate these Certificate Authorities on behalf of Google and Alphabet.

The process of embedding Root Certificates into products and waiting for the associated versions of those products to be broadly deployed can take time. For this reason we have also purchased two existing Root Certificate Authorities, GlobalSign R2 and R4. These Root Certificates will enable us to begin independent certificate issuance sooner rather than later.

We intend to continue the operation of our existing GIAG2 subordinate Certificate Authority.

If you need a bit of help putting this into perspective, this Hacker News thread has your back:

You can now have a website secured by a certificate issued by a Google CA, hosted on Google web infrastructure, with a domain registered using Google Domains, resolved using Google Public DNS, going over Google Fiber, in Google Chrome on a Google Chromebook. Google has officially vertically integrated the Internet.

Immutable Infrastructure with AWS and Ansible

Immutable infrastructure is a very powerful concept that brings stability, efficiency, and fidelity to your applications through automation and the use of successful patterns from programming.  The general idea is that you never make changes to running infrastructure.  Instead, you ensure that all infrastructure is created through automation, and to make a change, you simply create a new version of the infrastructure, and destroy the old one.
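As a trivial illustration of the idea (using the raw AWS CLI with made-up IDs here, not the Ansible playbooks the series builds up), replacing a server means launching a new one from a freshly built image and destroying the old one, never patching it in place:

# launch a replacement instance from a freshly baked image (hypothetical AMI ID)
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro --count 1

# once the new instance is healthy and traffic has moved over, retire the old one
aws ec2 terminate-instances --instance-ids i-0abcdef1234567890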

“Immutable Infrastructure with AWS and Ansible” is, so far, a three-part article series (part 1, part 2, part 3) that shows how to use Ansible to achieve immutable infrastructure on the Amazon Web Services cloud.

It covers everything from the basic setup of a workstation to execute Ansible playbooks, all the way to AWS security (users, roles, security groups), deployment of resources, and auto-scaling.