“Persisting state between AWS EC2 spot instances” is a handy guide to using Amazon EC2 spot instances instead of on-demand or reserved instances, and to preserving the state of the instance between terminations. This is not something that I’ve personally tried yet, but with the ever-growing number of instances I manage on AWS, this definitely looks like an interesting approach.
- This document originated as a collection of the most commonly used links and learning resources I sent to every new web developer on our full-stack web development team.
- For each problem domain and each technology, I try my best to pick only one or a few links that are the most important, typical, common or popular and not outdated, based on clear trends, public data and empirical observation.
- Prefer fine-grained classifications and deep hierarchies over featureless descriptions and distracting comments.
- Ideally, each line is a unique category. The “ / ” symbol between the links means they are replaceable. The “, ” symbol between the links means they are complementary.
- I wish this document could be closer to a kind of knowledge graph or skill tree than a list or a collection.
- It currently contains 2000+ links (projects, tools, plugins, services, articles, books, sites, etc.)
On one hand, this is one of the best single resources on the topic of web development that I’ve seen in a very long time. On the other hand, it re-confirms my belief that “there is no such thing as a full-stack web developer”. There are just too many levels, and too much depth at each level, for a single individual to be an expert in all of them. But you get bonus points for trying.
The Moby Project is a new open-source project to advance the software containerization movement and help the ecosystem take containers mainstream. It provides a library of components, a framework for assembling them into custom container-based systems and a place for all container enthusiasts to experiment and exchange ideas.
This just had to happen, given the nature of open source and the importance of container technology for modern infrastructure.
Here is some exciting news from the Bitbucket Pipelines blog: Bitbucket Pipelines now supports building Docker images, and service containers for database testing.
We developed Pipelines to enable teams to test and deploy software faster, using Docker containers to manage their build environment. Now we’re adding advanced Docker support – building Docker images, and Service containers for database testing.
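Based on the announcement, a minimal bitbucket-pipelines.yml using both features might look roughly like this (a sketch of mine; the image names, password and script steps are placeholders, not from the announcement):

options:
  docker: true                     # enables Docker commands inside build steps

definitions:
  services:
    mysql:
      image: mysql:5.7
      environment:
        MYSQL_ROOT_PASSWORD: let_me_in   # placeholder password
        MYSQL_DATABASE: test

pipelines:
  default:
    - step:
        services:
          - mysql                  # the database service container defined above
        script:
          - docker build -t my-app .    # build the application image
          - ./run-db-tests.sh           # hypothetical script; MySQL is reachable on 127.0.0.1:3306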
Federacy has published some interesting research into Docker image vulnerabilities. The bottom line is:
24% of latest Docker images have significant vulnerabilities
This can and should be improved, especially given the hierarchical structure of Docker images: patching a vulnerable base image propagates the fix to every image built on top of it. It’s not like trying to improve the security of countless unrelated GitHub repositories one by one.
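For example, a common mitigation (my sketch, not part of the Federacy research) is to rebuild images regularly on top of a freshly pulled base, picking up its latest security patches:

# Dockerfile sketch: assumes a Debian-based base image
FROM debian:stable
RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*

# --pull forces Docker to fetch the newest base layer instead of using a cached one
$ docker build --pull -t my-app .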
10 things to avoid in Docker containers provides a handy reminder of what NOT to do when building Docker containers. Read the full article for details and explanations. For a brief summary, here are the 10 things (with a short Dockerfile sketch after the list):
- Don’t store data in containers
- Don’t ship your application in two pieces
- Don’t create large images
- Don’t use a single layer image
- Don’t create images from running containers
- Don’t use only the “latest” tag
- Don’t run more than one process in a single container
- Don’t store credentials in the image. Use environment variables
- Don’t run processes as a root user
- Don’t rely on IP addresses
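To illustrate a few of these points, here is a minimal Dockerfile sketch (the application, user and paths are hypothetical examples of mine, not from the article):

FROM python:2.7                  # pin a specific tag instead of relying only on “latest”
RUN pip install gunicorn         # keep the image small: install only what is needed
COPY app/ /srv/app/              # ship the whole application, not just a piece of it
RUN useradd -m appuser           # create an unprivileged user...
USER appuser                     # ...so the process does not run as root
ENV APP_DB_HOST=db               # rely on hostnames and env variables, not IP addresses
# Credentials are passed at run time, never baked into the image:
#   docker run -e APP_DB_PASSWORD=... my-app
CMD ["gunicorn", "--chdir", "/srv/app", "app:application"]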
It’s a well-known fact that I am not the greatest fan of Microsoft and their technologies. I’ve been bitten many a time through the years. Not even their becoming a Platinum member of the Linux Foundation can change my attitude towards them. It’s just been too much pain, and scars, and tears, and sweat.
But the way life is, once in a while I just have to work with or around them. Recently, for example, we did a project at work that simply had to use MS SQL Server, and there was no way around it. Gladly, I managed to find just the right image on the Amazon AWS Marketplace and spin up a new EC2 instance for testing. Local development was difficult, but at least we had a place to test stuff before sending it off to the customer.
Here is a handy blog post that shows how to simplify the installation and running of the Amazon AWS command line interface (CLI) using Docker. With a Dockerfile like this:
FROM python:2.7
ENV AWS_DEFAULT_REGION='[your region]'
ENV AWS_ACCESS_KEY_ID='[your access key id]'
ENV AWS_SECRET_ACCESS_KEY='[your secret]'
RUN pip install awscli
CMD /bin/bash
One can build the image and run the container as follows:
$ docker build -t gnschenker/awscli .
$ docker push gnschenker/awscli:latest
$ docker run -it --rm -e AWS_DEFAULT_REGION='[your region]' -e AWS_ACCESS_KEY_ID='[your access ID]' -e AWS_SECRET_ACCESS_KEY='[your access key]' gnschenker/awscli:latest
Obviously, DO NOT hardcode your Amazon AWS credentials into an image that will be publicly available through Docker Hub. Pass them in as environment variables at run time, as in the docker run command above.
Once the AWS CLI works for you, you can add the command to your bash aliases, to make things even easier.
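For example, something along these lines (a sketch of mine; it assumes the AWS_* variables are already exported in your shell, so that docker run can forward them from the host):

# In ~/.bashrc or ~/.bash_aliases; “-e VAR” with no value forwards the host variable
alias aws='docker run -it --rm -e AWS_DEFAULT_REGION -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY gnschenker/awscli:latest aws'

After that, commands like aws s3 ls transparently run inside the container.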
I’ve been meaning to look into Docker for a long while now. But, as always, time is the issue. In the last couple of days, though, I’ve been integrating Bitbucket Pipelines into our workflow. Bitbucket Pipelines is a continuous integration solution that runs your project’s tests in a Docker container. So, naturally, I had to get a better idea of how the whole thing works.
The “Docker for PHP Developers” article was super useful, even though it wasn’t immediately applicable to Bitbucket Pipelines, which doesn’t currently support multiple containers – everything has to run within a single container.
The default Bitbucket Pipelines configuration suggests the phpunit/phpunit image. If you only want to run PHPUnit tests, that works fine. But if you want a full-blown Nginx and MySQL setup for extra bits (UI tests, integration tests, etc.), then you might find the smartapps/bitbucket-pipelines-php-mysql image much more useful. Here’s the full bitbucket-pipelines.yml file that I’ve ended up with.
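For reference, a stripped-down sketch of that kind of single-container setup (the script steps here are placeholders of mine; the linked file has the real details):

image: smartapps/bitbucket-pipelines-php-mysql

pipelines:
  default:
    - step:
        script:
          - service mysql start    # hypothetical: start the bundled MySQL inside the same container
          - composer install
          - vendor/bin/phpunit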
There is this discussion over at Stack Overflow: Should I use Vagrant or Docker for creating an isolated environment? It attracted the attention of the authors of both projects (as well as many other smart people). Read the whole thing for interesting insights into what’s there now and what’s coming. If you’d rather have a summary, here it is:
The short answer is that if you want to manage machines, you should use Vagrant. And if you want to build and run applications environments, you should use Docker.