How To Build a Serverless CI/CD Pipeline On AWS

“How To Build a Serverless CI/CD Pipeline On AWS” is a nice guide to some of the newer AWS services, targeted at developers and DevOps engineers.  It shows how to tie together the following:

  • Amazon EC2 (server instances)
  • Docker (containers)
  • Amazon ECR (Elastic Container Registry)
  • Amazon S3 (storage)
  • AWS IAM (Identity and Access Management)
  • AWS CodeBuild (Continuous Integration)
  • AWS CodePipeline (Continuous Delivery)
  • Amazon CloudWatch (monitoring)
  • AWS CloudTrail (logs)

The examples in the article are for setting up the CI/CD pipeline for .NET, but they are easily adaptable to other development stacks.
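
For a concrete feel of the CodeBuild piece, here is a minimal sketch of a buildspec.yml that builds a Docker image and pushes it to Amazon ECR.  The account ID, region, and image name are placeholders, and the login command assumes a recent AWS CLI – treat it as an illustration, not the article’s exact configuration:

    version: 0.2

    phases:
      pre_build:
        commands:
          # Log in to ECR (account ID and region are placeholders)
          - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
      build:
        commands:
          # Build the application image and tag it for the ECR repository
          - docker build -t my-app:latest .
          - docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
      post_build:
        commands:
          # Push the image so later pipeline stages can deploy it
          - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

CodePipeline then watches the source repository, runs this build, and hands the resulting image (or the artifacts stored in S3) to the delivery stages.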

S3 static site with SSL

“S3 static site with SSL and automatic deploys using Travis” is a goldmine of all those simple technologies tied into a single knot for an impressive result.  It has a bit of everything:

  • Jekyll – simple, blog-aware, static site engine, for managing content.
  • GitHub – for version control of the site’s content and for triggering the deployment chain.
  • Travis CI – for testing changes, building and deploying a new version.
  • Amazon S3 – simple, cheap, web-enabled storage of static content.
  • Amazon CloudFront – simple, cheap, geographically-distributed content delivery network (CDN).
  • Amazon Route 53 – simple and cheap DNS hosting and domain management.
  • Amazon IAM – identity and access management for the Amazon Web Services (AWS).
  • Let’s Encrypt – free SSL/TLS certificate provider.

When put together, these bits allow one to have a fast (static content, served over HTTP/2 from a geographically close CloudFront edge) and cheap (Jekyll, GitHub, Travis and Let’s Encrypt are free, with the rest of the services costing a few cents here and there) static website with SSL.
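
To give a rough idea of how the Travis CI piece ties the knot, here is a minimal sketch of a .travis.yml that builds the Jekyll site and pushes it to S3 via Travis’ built-in S3 deploy provider.  The bucket name is a placeholder, and the AWS credentials are assumed to be defined as encrypted environment variables in the Travis settings:

    language: ruby
    script: bundle exec jekyll build
    deploy:
      provider: s3
      access_key_id: $AWS_ACCESS_KEY_ID
      secret_access_key: $AWS_SECRET_ACCESS_KEY
      bucket: example.com    # placeholder bucket that backs the site
      local_dir: _site       # Jekyll's build output
      skip_cleanup: true     # keep the freshly built site for the deploy step

From there, CloudFront serves the bucket’s content over HTTPS, and Route 53 points the domain at the distribution.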

This is a classic example of how accessible and available modern technology is, if (and only if) you know what you are doing.

Trying out HashBackup with Amazon S3

These days I am once again improving my backup routines.  After I ran out of all reasonable space on my Dropbox account last year, I moved to homemade rsync scripts and offsite backup downloads between my server and my laptop.  Obviously, with my laptop being limited in disk space, and not always being online, the situation was less than ideal.  Eventually, I grew tired of keeping it all running.

A fresh look around at backup software turned up an application I hadn’t seen before – HashBackup.  It’s free, it has the simplest installation ever (a statically compiled binary), it runs on every platform I care about and then some, and it supports remote storage via pretty much any protocol.  It also features nice backup rotation plans and an interesting way of pushing backups to remote storage with sensible security.
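
To give a feel for just how simple it is, getting started looks roughly like this – a sketch with placeholder paths, so check hb’s built-in help for the exact options:

    # create the local backup directory (holds the catalog and configuration)
    hb init -c ~/hashbackup

    # back up the paths you care about into it
    hb backup -c ~/hashbackup /etc /home /var/www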

Once I settled on the software, I had to sort out my disk space issue.  A full server backup takes about 15 GB, and I want to keep a few of them around (daily, weekly, monthly, yearly, etc.), and I want to keep them off the server.  Not being too enthusiastic about running a home server all the time, and not having enough space and uptime on my laptop, I decided to check out some of those cloud storage solutions.  Yeah, I know…
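
Those rotation plans map onto HashBackup’s retain command.  Something along these lines should express a daily/weekly/monthly/yearly scheme – the schedule syntax here is from memory, so verify it against the documentation before relying on it:

    # keep 7 daily, 4 weekly, 12 monthly and 2 yearly backups
    hb retain -c ~/hashbackup -s 7d4w12m2y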

My choice fell upon Amazon S3, not for any particular reason.  It seems cheap, fast, reliable and quite popular, and HashBackup supports it too.  So I’ve spent a couple of days (nights, actually) configuring everything to my liking, and now the backups are running smoothly without any intervention on my end.

Before I finalize my decision, I want to see the actual Amazon charge.  Their prices seem to be well within my budget, but there are many variables that I might be misinterpreting.  If they charge what they say they will, I might free up much more space across all my computers.

As far as tips go, I have two, if you decide to follow this path:

  1. When configuring HashBackup, you’ll find that the documentation on the site is awesome.  However, it keeps referring to the dest.conf file that you’d use to configure remote destinations.  Example files are not part of the online documentation; you’ll find them (one for each type of remote destination) in the software tarball, in the doc/ folder – see the sketch after this list.
  2. When configuring Amazon S3, you’ll probably be tempted to use a more restrictive access policy than the ones offered by Amazon.  For instance, you’d probably want to limit access by folder (key prefix) rather than by bucket.  Word of advice: start with Amazon’s stock policy first and make sure everything works, and only then switch to your own custom policy – an example follows below.  Otherwise, you might spend too much time troubleshooting the wrong issue.
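
For reference, a dest.conf for Amazon S3 looks roughly like the sketch below.  The field names follow my memory of those bundled examples, and the values are obviously placeholders – the files in the doc/ folder are the authoritative reference:

    # dest.conf – based on the example shipped in the tarball's doc/ folder
    destname mys3
    type s3
    accesskey AKIAXXXXXXXXXXXXXXXX
    secretkey xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    bucket my-backup-bucket

And once everything works with Amazon’s stock policy, a custom policy that limits access to a single folder (key prefix) might look like this – the bucket and prefix names are placeholders:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
          "Resource": "arn:aws:s3:::my-backup-bucket/server1/*"
        },
        {
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::my-backup-bucket",
          "Condition": { "StringLike": { "s3:prefix": ["server1/*"] } }
        }
      ]
    }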