Here is a handy blog post that shows how to simplify installing and running the AWS command-line interface (CLI) using Docker. With a Dockerfile like this:
# Any small base image that ships pip will do; python:alpine is one option
FROM python:alpine

ENV AWS_DEFAULT_REGION='[your region]'
ENV AWS_ACCESS_KEY_ID='[your access key id]'
ENV AWS_SECRET_ACCESS_KEY='[your secret]'

RUN pip install awscli
One can build the image and run the container as follows:
$ docker build -t gnschenker/awscli .
$ docker push gnschenker/awscli:latest
$ docker run -it --rm -e AWS_DEFAULT_REGION='[your region]' -e AWS_ACCESS_KEY_ID='[your access ID]' -e AWS_SECRET_ACCESS_KEY='[your access key]' gnschenker/awscli:latest
Obviously, DO NOT hardcode your AWS credentials into an image that will be publicly available on Docker Hub; pass them to the container at runtime via -e flags instead.
Once the AWS CLI works for you, you can add the command to your bash aliases to make things even easier.
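For example, a one-line alias in ~/.bashrc built from the run command above (the alias name is arbitrary; substitute your own values or read them from your environment):

```shell
# In ~/.bashrc or ~/.bash_aliases; "awscli" is just an example name
alias awscli='docker run -it --rm \
  -e AWS_DEFAULT_REGION="[your region]" \
  -e AWS_ACCESS_KEY_ID="[your access ID]" \
  -e AWS_SECRET_ACCESS_KEY="[your access key]" \
  gnschenker/awscli:latest'
```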
I’ve been meaning to look into Docker for a long while now. But, as always, time is the issue. In the last couple of days though I’ve been integrating BitBucket Pipelines into our workflow. BitBucket Pipelines is a continuous integration solution, which runs your project tests in a Docker container. So, naturally, I had to get a better idea of how the whole thing works.
The “Docker for PHP Developers” article was super useful, even though it wasn’t immediately applicable to BitBucket Pipelines, which doesn’t currently support multiple containers – everything has to run within a single container.
The default BitBucket Pipelines configuration suggests the phpunit/phpunit image. If you want to run PHPUnit tests only, that works fine. But if you want a full-blown Nginx and MySQL setup for extra bits (UI tests, integration tests, etc.), then you might find the smartapps/bitbucket-pipelines-php-mysql image much more useful. Here’s the full bitbucket-pipelines.yml file that I’ve ended up with.
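Since the file itself isn’t reproduced here, a minimal sketch of what such a bitbucket-pipelines.yml could look like (the steps are illustrative and depend on your project; only the image name comes from above):

```yaml
# Illustrative only; adjust services and scripts to your project
image: smartapps/bitbucket-pipelines-php-mysql

pipelines:
  default:
    - step:
        script:
          - service mysql start
          - composer install
          - vendor/bin/phpunit
```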
There is this discussion over at StackOverflow: Should I use Vagrant or Docker for creating an isolated environment? It attracted the attention of the authors of both projects (as well as many other smart people). Read the whole thing for interesting insights into what’s there now and what’s coming. If you’d rather have a summary, here it is:
The short answer is that if you want to manage machines, you should use Vagrant. And if you want to build and run applications environments, you should use Docker.
With the recent explosion in virtualization and container technologies, one is often left disoriented. Questions like “should I use virtual machines or containers?”, “which technology should I use?”, and “can I migrate from one to another later?” are just some of those that need answering.
Here is an open source tool that helps to avoid a few of those questions – Packer (by HashiCorp):
Packer is a tool for creating machine and container images for multiple platforms from a single source configuration.
Have a look at the supported platforms:
- Amazon EC2 (AMI). Both EBS-backed and instance-store AMIs within EC2, optionally distributed to multiple regions.
- DigitalOcean. Snapshots for DigitalOcean that can be used to start a pre-configured DigitalOcean instance of any size.
- Docker. Snapshots for Docker that can be used to start a pre-configured Docker instance.
- Google Compute Engine. Snapshots for Google Compute Engine that can be used to start a pre-configured Google Compute Engine instance.
- OpenStack. Images for OpenStack that can be used to start pre-configured OpenStack servers.
- Parallels (PVM). Exported virtual machines for Parallels, including virtual machine metadata such as RAM, CPUs, etc. These virtual machines are portable and can be started on any platform Parallels runs on.
- QEMU. Images for KVM or Xen that can be used to start pre-configured KVM or Xen instances.
- VirtualBox (OVF). Exported virtual machines for VirtualBox, including virtual machine metadata such as RAM, CPUs, etc. These virtual machines are portable and can be started on any platform VirtualBox runs on.
- VMware (VMX). Exported virtual machines for VMware that can be run within any desktop products such as Fusion, Player, or Workstation, as well as server products such as vSphere.
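To give a feel for the “single source configuration”, here is a minimal sketch of a Packer template using the Docker builder (the base image and provisioning commands are placeholders; templates are JSON):

```json
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:16.04",
      "commit": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["apt-get update", "apt-get install -y nginx"]
    }
  ]
}
```

Running `packer build template.json` would produce a committed Docker image; swapping or adding builders targets the other platforms from the same template.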
The only question remaining now, it seems, is “why wouldn’t you use it?”. :)
Containers (Docker, et al) have been getting all the hype recently. I’ve played around with these a bit, but I’m not yet convinced this is the next greatest thing for projects that I am involved with currently. However, it helps to look at these from different perspectives. Here’s a blog post that ties containers to a new term that I haven’t heard before – algorithm economy.
The “algorithm economy” is a term established by Gartner to describe the next wave of innovation, where developers can produce, distribute, and commercialize their code. The algorithm economy is not about buying and selling complete apps, but rather functional, easy to integrate algorithms that enable developers to build smarter apps, quicker and cheaper than before.