HAProxy SNI

“HAProxy SNI” is pure gold!  If you want to load balance HTTPS traffic without managing SSL certificates on the load balancer itself, there is a way to do so.

The approach utilizes the Server Name Indication (SNI) extension to the TLS protocol.  I knew about SNI and was already using it on the web server side, but it didn’t occur to me that it could be used on the load balancer as well.  Here’s the configuration bit:

frontend https
  bind *:443
  description Incoming traffic to port 443
  mode tcp
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }
  use_backend backend-ssl-foobar if { req_ssl_sni -i foobar.com }
  use_backend backend-ssl-example if { req_ssl_sni -i example.com }
  default_backend backend-ssl-default

The above will make HAProxy listen on port 443, and then send all traffic for foobar.com to one backend, all traffic for example.com to another backend, and the rest to a third, default backend.  Since the routing decision is based only on the SNI field, which the client sends in plain text during the TLS handshake, HAProxy never decrypts the traffic and doesn’t need any certificates at all.
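For completeness, here is a minimal sketch of what the matching backends could look like (the server names and addresses are made up for illustration).  They also run in TCP mode, so the encrypted traffic is passed through to the real web servers untouched:

backend backend-ssl-foobar
  mode tcp
  server foobar1 10.0.0.11:443 check

backend backend-ssl-example
  mode tcp
  server example1 10.0.0.12:443 check

backend backend-ssl-default
  mode tcp
  server default1 10.0.0.13:443 check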

Charles – web debugging proxy application

Charles is a web debugging proxy application for Windows, Mac OS, and Linux.  Here’s a quick description from the project’s website:

Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a developer to view all of the HTTP and SSL / HTTPS traffic between their machine and the Internet. This includes requests, responses and the HTTP headers (which contain the cookies and caching information).

And here are some key features:

  • SSL Proxying – view SSL requests and responses in plain text
  • Bandwidth Throttling to simulate slower Internet connections including latency
  • AJAX debugging – view XML and JSON requests and responses as a tree or as text
  • AMF – view the contents of Flash Remoting / Flex Remoting messages as a tree
  • Repeat requests to test back-end changes
  • Edit requests to test different inputs
  • Breakpoints to intercept and edit requests or responses
  • Validate recorded HTML, CSS and RSS/atom responses using the W3C validator

Pretty much every browser these days ships with built-in developer tools (Google Chrome, for example).

But these are mostly useful for requests made by the browser itself.  Often, as shown in the “PHP and cURL: How WordPress makes HTTP requests” blog post (from which I learned about Charles), one needs to examine requests made by the application itself, such as WordPress in this particular case.

The browser’s developer tools won’t help much there, but a proxy application like Charles will.  Configure the application to send its requests through the proxy, and all of them become available for inspection and debugging.
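In the WordPress case, for example, the HTTP API can be pointed at a proxy with a couple of constants in wp-config.php.  Here’s a minimal sketch, assuming Charles is running locally on its default port 8888:

// wp-config.php: route all WordPress HTTP API requests through Charles
define( 'WP_PROXY_HOST', '127.0.0.1' );
define( 'WP_PROXY_PORT', '8888' );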

Google and HTTPS

Here is some interesting news on the subject of Google and HTTPS:

In support of our work to implement HTTPS across all of our products (https://www.google.com/transparencyreport/https/) we have been operating our own subordinate Certificate Authority (GIAG2), issued by a third-party. This has been a key element enabling us to more rapidly handle the SSL/TLS certificate needs of Google products.

As we look forward to the evolution of both the web and our own products it is clear HTTPS will continue to be a foundational technology. This is why we have made the decision to expand our current Certificate Authority efforts to include the operation of our own Root Certificate Authority. To this end, we have established Google Trust Services (https://pki.goog/), the entity we will rely on to operate these Certificate Authorities on behalf of Google and Alphabet.

The process of embedding Root Certificates into products and waiting for the associated versions of those products to be broadly deployed can take time. For this reason we have also purchased two existing Root Certificate Authorities, GlobalSign R2 and R4. These Root Certificates will enable us to begin independent certificate issuance sooner rather than later.

We intend to continue the operation of our existing GIAG2 subordinate Certificate Authority.

If you need a bit of help putting this into perspective, this Hacker News thread has your back:

You can now have a website secured by a certificate issued by a Google CA, hosted on Google web infrastructure, with a domain registered using Google Domains, resolved using Google Public DNS, going over Google Fiber, in Google Chrome on a Google Chromebook. Google has officially vertically integrated the Internet.

The History of the URL

The History of the URL is a brilliant compilation of ideas and resources, explaining how we got to the URLs we use and love (or hate) today.  In fact, the article comes in two parts:

  1. Domain, protocol, and port
  2. Path, fragment, query, and auth

Read them in whatever order you prefer.  But I guarantee that you’ll have a number of different reactions throughout, from “Wow! I never knew that” and “I would have never thought of that!” to “No way! I don’t believe it”.

And here is one of the bits that made me smile:

In 1996 Keith Shafer, and several others proposed a solution to the problem of broken URLs. The link to this solution is now broken. Roy Fielding posted an implementation suggestion in July of 1995. The link is now broken.

tus.io – open protocol for resumable file uploads

tus.io, in their own words:

People share more and more photos and videos every day. Mobile networks remain fragile however. Platform APIs are a mess and every project builds its own file uploader. There are a thousand one week projects that barely work, when all we need is one real project. One project done right.

We are going to do this right. We aim to solve the problem of unreliable file uploads once and for all. tus is a new open protocol for resumable uploads built on HTTP. It offers simple, cheap and reusable stacks for clients and servers. It supports any language, any platform and any network.
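To give a flavor of how it works, here is a rough sketch of the core upload flow from the tus 1.0 protocol (the host, file identifier, and sizes are illustrative).  The client creates an upload resource, and after an interrupted transfer it asks the server how many bytes arrived, then resumes from that offset:

POST /files HTTP/1.1
Host: tus.example.com
Tus-Resumable: 1.0.0
Upload-Length: 100

HTTP/1.1 201 Created
Location: https://tus.example.com/files/24e533e0
Tus-Resumable: 1.0.0

HEAD /files/24e533e0 HTTP/1.1
Host: tus.example.com
Tus-Resumable: 1.0.0

HTTP/1.1 200 OK
Upload-Offset: 70
Upload-Length: 100
Tus-Resumable: 1.0.0

PATCH /files/24e533e0 HTTP/1.1
Host: tus.example.com
Tus-Resumable: 1.0.0
Content-Type: application/offset+octet-stream
Content-Length: 30
Upload-Offset: 70

[remaining 30 bytes]

HTTP/1.1 204 No Content
Tus-Resumable: 1.0.0
Upload-Offset: 100

The PATCH request appends data at the given offset, and once Upload-Offset reaches Upload-Length, the upload is complete.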