Secure Headers – a PHP library for easier management of browser security features

Modern browsers offer a variety of security mechanisms for web developers.  Unfortunately, some of these aren't easy to manage: one needs a deep understanding of the functionality, as well as the theory behind it.  Secure Headers is a library that makes all that work a lot easier for PHP developers.  Here are some of the features (a small plain-PHP illustration follows the list):

  • Add/remove and manage headers easily
  • Build a Content Security Policy, or combine multiple together
  • Content Security Policy analysis
  • Easy integration with arbitrary frameworks (take a look at the HttpAdapter)
  • Protect incorrectly set cookies
  • Strict mode
  • Safe mode prevents accidental long-term self-DoS when using HSTS or HPKP
  • Receive warnings about missing, or misconfigured security headers
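
For a sense of what the library saves you from, here is roughly what managing a couple of these protections looks like by hand, using nothing but PHP's built-in header() and setcookie() functions.  This is a minimal sketch with illustrative values, not the library's own API (see its README for that):

<?php
// Two of the protections the library manages, set by hand for illustration

// A restrictive Content Security Policy: allow same-origin resources only
header("Content-Security-Policy: default-src 'self'");

// A cookie with the Secure (HTTPS-only) and HttpOnly (no JavaScript access)
// flags - the kind of fix the library's cookie protection applies for you
setcookie('session', 'opaque-value', 0, '/', '', true, true);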

Spellbook of Modern Web Dev

Spellbook of Modern Web Dev is a collection of 2,000+ carefully selected links to resources on anything web development related.  It covers subjects from Internet history and the basics of HTML, CSS, and JavaScript, all the way to tools, libraries, and advanced usage of web technologies; from network protocols and browser compatibility to development environments, containers, and ChatOps.

  • This document originated from a bunch of the most commonly used links and learning resources I sent to every new web developer on our full-stack web development team.
  • For each problem domain and each technology, I try my best to pick only one or a few links that are the most important, typical, common, or popular, and not outdated, based on clear trends, public data, and empirical observation.
  • Prefer fine-grained classifications and deep hierarchies over featureless descriptions and distracting comments.
  • Ideally, each line is a unique category. The “/” symbol between links means they are replaceable. The “,” symbol between links means they are complementary.
  • I wish this document could be closer to a kind of knowledge graph or skill tree than a list or a collection.
  • It currently contains 2,000+ links (projects, tools, plugins, services, articles, books, sites, etc.)

On one hand, this is one of the best single resources on the topic of web development that I've seen in a very long time.  On the other hand, it re-confirms my belief that “there is no such thing as a full-stack web developer”.  There are just too many levels, and too much depth at each level, for a single individual to master them all.  But you get bonus points for trying.

HTTPS on Stack Overflow: The End of a Long Road

Way too often I hear rants from random people (unfortunately, many of them are also from the IT industry, with a deep understanding of the underlying issues) complaining about why company X or product Y doesn't implement this or that feature.  As someone who has been involved in dozens, if not hundreds, of projects, I can almost always think of a number of reasons why even the seemingly simplest features aren't implemented for years.  These can vary from the business side of things – insufficient budgets, strategic goals, and the like – to the technical side, such as architectural limitations, insufficient expertise, insufficient resources, etc.

One frequent rant that keeps coming up these days is “Why don't they just enable HTTPS?”.  Again, as someone who has been involved in HTTPS setups for several different environments, I can think of a number of reasons why not.  Until very recently, SSL certificates cost money and were quite cumbersome to install.  Thanks to the Let's Encrypt effort, SSL certificates are now free and quite easy to issue and renew.  But that's only part of the problem.  Enabling HTTPS requires infrastructural changes, and the more complex your infrastructure, the more changes are needed.  Just think of a few points here – web server configuration (especially when you have multiple web servers, running varied software (Apache, Nginx, IIS) and varied versions of that software), load balancers, web application firewalls, reverse proxies, caching servers, and so on.

Apart from the infrastructural changes, HTTPS often requires changes at the application level: caching, cookies, headers, redirects, and making sure that all your resources are served over HTTPS only.
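
As a minimal sketch (plain PHP, illustrative values), two of the most common application-level chores look something like this:

<?php
// Redirect any plain-HTTP request to its HTTPS equivalent
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    $url = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
    header('Location: ' . $url, true, 301);
    exit;
}

// Make sure session cookies are flagged Secure and HttpOnly
ini_set('session.cookie_secure', '1');
ini_set('session.cookie_httponly', '1');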

All of the above issues are multiplied by a gazillion when your project is publicly available, used by tons of people, and provides embeddable content or APIs to third parties (hello, backward compatibility).

This is not to mention that HTTPS itself is a complex subject, not well understood even by the most experienced system administrators and developers.  There are different protocols and versions (SSL vs. TLS), cipher suites, handshakes, and protocol details.  Just have a look at the variety of checks, and the length of the report, produced by Qualys' SSL Labs Server Test.  Even giants like Google, who employ thousands of smart people, can't get it all right.

But for some reason, people either don't know about all this complexity or prefer to ignore it, and whine and cry anyway.

Recently, Stack Overflow – the largest site of the well-known Stack Exchange network, covering a variety of technical subjects – completed its migration to HTTPS everywhere.  These are people with a lot of knowledge and expertise, and with access to all the relevant information.  Just have a look at their long road, which took not months but years: HTTPS on Stack Overflow: The End of a Long Road.

Today, we deployed HTTPS by default on Stack Overflow. All traffic is now redirected to https:// and Google links will change over the next few weeks. The activation of this is quite literally flipping a switch (feature flag), but getting to that point has taken years of work. As of now, HTTPS is the default on all Q&A websites.

We’ve been rolling it out across the Stack Exchange network for the past 2 months. Stack Overflow is the last site, and by far the largest. This is a huge milestone for us, but by no means the end. There’s still more work to do, which we’ll get to. But the end is finally in sight, hooray!

So next time you are about to start crying about somebody not having feature X or Y, just give it a minute first.  Try to imagine what goes on on the other side.  You aren’t the only one with low budgets, pressing deadlines, insufficient knowledge, bad colleagues and horrible bosses…

Using the Strict-Transport-Security header

Julia Evans has an excellent write-up on “Using the Strict-Transport-Security header” – what it is, why you'd want to use it, and what some of the consequences of using it are.

As always with her blog posts, this one is tightly focused on one particular subject, easy to read, and explains things simply, so almost any technical level will do (OK, OK, you do need a basic understanding of how HTTP works, but not much more than that).
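
For reference, the header itself is a single line.  A common way to avoid the accidental long-term self-DoS mentioned earlier is to start with a short max-age and increase it only once you're confident HTTPS works everywhere – sketched below in plain PHP with illustrative values:

<?php
// Step 1: a five-minute max-age while verifying that HTTPS works everywhere
header('Strict-Transport-Security: max-age=300');

// Step 2, weeks later: a full year, including subdomains.  This is the
// hard-to-undo part - browsers will refuse plain HTTP for the whole period.
// header('Strict-Transport-Security: max-age=31536000; includeSubDomains');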

HAProxy SNI

“HAProxy SNI” is pure gold!  If you want to load balance HTTPS traffic without managing SSL certificates on said load balancer, there is a way to do so.

The approach utilizes the Server Name Indication (SNI) extension to the TLS protocol.  I knew about SNI and was already using it on the web server side, but it hadn't occurred to me that it could be used on the load balancer as well.  Here's the configuration bit:

frontend https *:443
  description Incoming traffic to port 443
  # TCP mode: pass TLS traffic through without terminating it here
  mode tcp
  # Wait up to 5 seconds for the TLS ClientHello to arrive
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }
  # Route based on the SNI hostname in the ClientHello (-i: case-insensitive)
  use_backend backend-ssl-foobar if { req_ssl_sni -i foobar.com }
  use_backend backend-ssl-example if { req_ssl_sni -i example.com }
  default_backend backend-ssl-default

The above will make HAProxy listen on port 443, and then send all traffic for foobar.com to one backend, all traffic for example.com to another backend, and the rest to the third, default backend.

Charles – web debugging proxy application

Charles is a web debugging proxy application for Windows, Mac OS, and Linux.  Here’s a quick description from the project’s website:

Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a developer to view all of the HTTP and SSL / HTTPS traffic between their machine and the Internet. This includes requests, responses and the HTTP headers (which contain the cookies and caching information).

And here are some key features:

  • SSL Proxying – view SSL requests and responses in plain text
  • Bandwidth Throttling to simulate slower Internet connections including latency
  • AJAX debugging – view XML and JSON requests and responses as a tree or as text
  • AMF – view the contents of Flash Remoting / Flex Remoting messages as a tree
  • Repeat requests to test back-end changes
  • Edit requests to test different inputs
  • Breakpoints to intercept and edit requests or responses
  • Validate recorded HTML, CSS and RSS/atom responses using the W3C validator

Pretty much every browser these days ships with developer tools (Google Chrome, for example).

But those are mostly useful for requests made by the browser itself.  Often – as in the “PHP and cURL: How WordPress makes HTTP requests” blog post, from which I learned about Charles – one needs to examine requests made by the application itself, like WordPress in this particular case.

The browser's developer tools won't be of much use there, but a proxy application like Charles will.  Routing the application's requests through the proxy allows for easy inspection and debugging.
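
As a sketch of that setup – assuming Charles' default listening address of 127.0.0.1:8888 and a hypothetical API URL – pointing a PHP cURL request through the proxy looks something like this:

<?php
// Hypothetical request, routed through a locally running Charles instance
$ch = curl_init('https://api.example.com/data');
curl_setopt($ch, CURLOPT_PROXY, '127.0.0.1:8888');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// To inspect HTTPS bodies you would also trust Charles' root certificate;
// disabling peer verification like this is for local debugging only
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
$response = curl_exec($ch);
curl_close($ch);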

Google and HTTPS

Here is some interesting news on the subject of Google and HTTPS:

In support of our work to implement HTTPS across all of our products (https://www.google.com/transparencyreport/https/) we have been operating our own subordinate Certificate Authority (GIAG2), issued by a third-party. This has been a key element enabling us to more rapidly handle the SSL/TLS certificate needs of Google products.

As we look forward to the evolution of both the web and our own products it is clear HTTPS will continue to be a foundational technology. This is why we have made the decision to expand our current Certificate Authority efforts to include the operation of our own Root Certificate Authority. To this end, we have established Google Trust Services (https://pki.goog/), the entity we will rely on to operate these Certificate Authorities on behalf of Google and Alphabet.

The process of embedding Root Certificates into products and waiting for the associated versions of those products to be broadly deployed can take time. For this reason we have also purchased two existing Root Certificate Authorities, GlobalSign R2 and R4. These Root Certificates will enable us to begin independent certificate issuance sooner rather than later.

We intend to continue the operation of our existing GIAG2 subordinate Certificate Authority.

If you need a bit of help putting this into perspective, this Hacker News thread has your back:

You can now have a website secured by a certificate issued by a Google CA, hosted on Google web infrastructure, with a domain registered using Google Domains, resolved using Google Public DNS, going over Google Fiber, in Google Chrome on a Google Chromebook. Google has officially vertically integrated the Internet.

The History of the URL

The History of the URL is a brilliant compilation of ideas and resources, explaining how we got to the URLs we use and love (or hate) today.  In fact, the article comes in two parts:

  1. Domain, protocol, and port
  2. Path, fragment, query, and auth

Read them in whatever order you prefer.  But I guarantee that you'll have a range of reactions throughout, from “Wow! I never knew that” and “I would have never thought of that!” to “No way! I don't believe it”.

And here is one of the bits that made me smile:

In 1996 Keith Shafer, and several others proposed a solution to the problem of broken URLs. The link to this solution is now broken. Roy Fielding posted an implementation suggestion in July of 1995. The link is now broken.