How To Build a Serverless CI/CD Pipeline On AWS

“How To Build a Serverless CI/CD Pipeline On AWS” is a nice guide to some of the newer Amazon AWS services, targeted at developers and DevOps engineers. It shows how to tie together the following:

  • Amazon EC2 (server instances)
  • Docker (containers)
  • Amazon ECR (Elastic Container Registry)
  • Amazon S3 (storage)
  • AWS IAM (Identity and Access Management)
  • AWS CodeBuild (Continuous Integration)
  • AWS CodePipeline (Continuous Delivery)
  • Amazon CloudWatch (monitoring)
  • AWS CloudTrail (logs)

The examples in the article are for setting up the CI/CD pipeline for .NET, but they are easily adaptable for other development stacks.
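The CodeBuild stage of such a pipeline is driven by a buildspec file. As a rough sketch (the account ID, region, and repository name below are placeholders, and the article’s .NET specifics are omitted), building a Docker image and pushing it to ECR looks something like this:

```yaml
# buildspec.yml for AWS CodeBuild -- a sketch; 123456789012, us-east-1
# and my-app are placeholders, not values from the article.
version: 0.2
phases:
  pre_build:
    commands:
      # authenticate Docker against ECR (modern AWS CLI form)
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
  build:
    commands:
      - docker build -t my-app .
      - docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
  post_build:
    commands:
      - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```

CodePipeline then watches the source repository and hands each commit to CodeBuild, so a push becomes a new image in ECR without any manual steps.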

Scheduled pipelines now available in Bitbucket Pipelines

The Bitbucket blog announces support for scheduled Bitbucket Pipelines.  This is super cool and has been on the wishlist for a while now.  Here are a few examples of how this feature is useful:

  • Nightly builds that take longer to run
  • Daily or weekly deployments to a test environment
  • Data validation and backups
  • Load tests and tracking performance over time
  • Jobs and tasks that aren’t coupled to code changes
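For example, a scheduled pipeline is defined as a custom pipeline in bitbucket-pipelines.yml and then attached to a schedule in the repository settings. A minimal sketch (the script names are hypothetical):

```yaml
# bitbucket-pipelines.yml -- a sketch; the two scripts are placeholders.
pipelines:
  custom:                      # custom pipelines never run on push;
    nightly-build:             # they run manually or on a schedule
      - step:
          script:
            - ./run-long-tests.sh      # hypothetical long-running suite
    deploy-to-test:
      - step:
          script:
            - ./deploy-to-test.sh      # hypothetical test deployment
# A schedule (e.g. nightly at 02:00) is then created in the repository
# settings and pointed at one of these custom pipelines and a branch.
```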

Making “Push on Green” a Reality

Making “Push on Green” a Reality is an insider look at how Google handles continuous deployment.  Very few teams and companies need to deal with that level of complexity, but the overall principles probably still apply.

Updating production software is a process that may require dozens, if not hundreds, of steps. These include creating and testing new code, building new binaries and packages, associating the packages with a versioned release, updating the jobs in production datacenters, possibly modifying database schemata, and testing and verifying the results. There are boxes to check and approvals to seek, and the more automated the process, the easier it becomes. When releases can be made faster, it is possible to release more often, and, organizationally, one becomes less afraid to “release early, release often”. And that’s what we describe in this article—making rollouts as easy and as automated as possible. When a “green” condition is detected, we can more quickly perform a new rollout. Humans are still needed somewhere in the loop, but we strive to reduce the purely mechanical toil they need to perform.
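The core “push on green” loop is simple enough to sketch in a few lines of shell. Here check_ci_status and deploy are hypothetical stand-ins for a real CI API call and a real rollout tool, not anything from the article:

```shell
# "Push on green" in miniature -- a sketch, not Google's actual tooling.
check_ci_status() { cat /tmp/ci-status 2>/dev/null; }  # stand-in for a CI API call
deploy() { echo "rolling out build $1"; }              # stand-in for a rollout tool

echo green > /tmp/ci-status      # simulate the CI turning green
if [ "$(check_ci_status)" = "green" ]; then
  deploy "1234"                  # only roll out when the build is green
fi
```

In a real setup the humans stay in the loop around this check (approvals, staged rollouts, rollback triggers); the loop itself is what gets automated.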

Continuous Integration Servers


Here’s a list of Continuous Integration (CI) servers / solutions for those who are still trying to choose:

Via volkswagen.

From 15 hours to 15 seconds: reducing a crushing build time

In summary:

  • Bad Practice #1: We favoured integration tests over unit tests.
  • Bad Practice #2: We had many, many features that were relatively unimportant.
  • Bad Practice #3: Our integration tests were actually acceptance tests.
  • Bonus tip: run the build entirely on the tmpfs in-memory file system.
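The tmpfs tip is easy to try. On most Linux systems /dev/shm is already a tmpfs mounted for all users, so no root is needed; a build script can simply prefer it when present (the paths below are illustrative, and mounting a dedicated tmpfs would need root, e.g. `mount -t tmpfs -o size=2g tmpfs /mnt/build`):

```shell
# Choose an in-memory build directory when one is available -- a sketch.
pick_build_dir() {
  # /dev/shm is a tmpfs on most Linux distributions, so builds placed
  # there never touch the disk; fall back to the regular temp dir.
  if [ -d /dev/shm ] && [ -w /dev/shm ]; then
    mktemp -d /dev/shm/build.XXXXXX
  else
    mktemp -d
  fi
}

BUILD_DIR="$(pick_build_dir)"
echo "building in $BUILD_DIR"
# e.g. cp -r ./src "$BUILD_DIR" && make -C "$BUILD_DIR/src"
```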

On builds and releases

Once in a while I find myself in a conversation about builds and releases.  It’s one of those topics where, before the conversation, everyone seems to be on the same page, but the moment it starts there’s a massive fight about how the world works today and what the best path into the future is.  And it gets messy.

I believe that the old approach of one release a decade is dead, especially in web application development.  The world is much more dynamic now, and so should be the release plans.  This seems obvious to many, and yet not a lot of people understand its implications.  Making releases more dynamic means making the release operation cheaper, ideally free.  Can you release a new version of the project once a day?  How about every hour?  Why not?  You should be able to.  Whether or not you actually release that often, the path to making releases cheap is automation.  And that means you have to have some form of software version control, some form of build or deploy script, and, of course, some form of rollback script for those times when things go wrong.
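A cheap release operation doesn’t have to mean heavy tooling. Here is a minimal sketch of a symlink-swap deploy with rollback (the directory layout and release IDs are hypothetical): each release lives in its own directory, and both deploy and rollback are just atomic symlink flips.

```shell
# Symlink-swap deploy with rollback -- a sketch, not a production tool.
APP="$(mktemp -d)"               # hypothetical app root
mkdir -p "$APP/releases"

deploy() {  # deploy <release-id>: new release dir, then flip "current"
  mkdir -p "$APP/releases/$1"
  ln -sfn "$APP/releases/$1" "$APP/current"
  echo "$1" >> "$APP/history"
}

rollback() {  # point "current" back at the previous release
  prev=$(tail -n 2 "$APP/history" | head -n 1)
  ln -sfn "$APP/releases/$prev" "$APP/current"
}

deploy v1
deploy v2
rollback
readlink "$APP/current"          # -> $APP/releases/v1
```

Because “current” is swapped in a single `ln -sfn`, a rollback is just as cheap as a deploy, which is exactly what makes releasing often feel safe.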

One of the things that I do at my current job is setting up such a deployment process.  I’ve done it before, but it’s been a while, and given how fast these things change and improve, I’ve been looking around for new tools and ideas.  While doing so, I came across an interesting GitHub blog post.  And while their requirements and environment are different from mine, I still found it useful.  One of the things that shows how well their process works is the stats at the end of the post.  Just look at them.

That’s about 100 deploys per day! Not bad, not bad at all.