This Amazon AWS blog post provides great insight into the benefits of cloud computing in general, and of Amazon AWS in particular. The whole thing is well worth the read, but here are a few of my favorite bits.
The grid grew to 61,299 Spot Instances (1.3 million vCPUs drawn from 34 instance types spanning 3 generations of EC2 hardware) as planned, with just 1,937 instances reclaimed and automatically replaced during the run, and cost $30,000 per hour to run, at an average hourly cost of $0.078 per vCPU. If the same instances had been used in On-Demand form, the hourly cost to run the grid would have been approximately $93,000.
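The savings quoted above can be sanity-checked with a bit of arithmetic (a quick sketch; the $30,000 and $93,000 hourly figures come straight from the quote):

```python
# Spot vs. On-Demand hourly cost for the grid, as quoted in the AWS post.
spot_hourly = 30_000       # USD per hour on Spot Instances
on_demand_hourly = 93_000  # USD per hour if the same instances ran On-Demand

savings = 1 - spot_hourly / on_demand_hourly
print(f"Spot runs the grid at roughly {savings:.0%} below On-Demand pricing")
```

That is roughly a two-thirds discount, which is in line with the typical Spot pricing spread.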
The scale of some Amazon AWS customers:
1.3 million vCPUs (5x the size of the largest on-premises grid)
The evolution of computing power over the last few years:
To give you a sense of the compute power, we computed that this grid would have taken the #1 position on the TOP 500 supercomputer list in November 2007 by a considerable margin, and the #2 position in June 2008. Today, it would occupy position #360 on the list.
Now, just for fun, entertain the idea of building something like this in-house…
The AWS Developer Blog ran this post a while back – “PHP application logging with Amazon CloudWatch Logs and Monolog” – in which they show how to use Monolog and Amazon CloudWatch together in any PHP application. It goes beyond the basic configuration of connecting the two, all the way into setting up log metrics, etc.
“Introducing the AWS Amplify GraphQL Client” showcases the new GraphQL client built by the AWS Amplify team. It’s pretty sweet.
AWS News Blog covers the Registry of Open Data on AWS:
Almost a decade ago, my colleague Deepak Singh introduced the AWS Public Datasets in his post Paging Researchers, Analysts, and Developers. I’m happy to report that Deepak is still an important part of the AWS team and that the Public Datasets program is still going strong!
Today we are announcing a new take on open and public data, the Registry of Open Data on AWS, or RODA. This registry includes existing Public Datasets and allows anyone to add their own datasets so that they can be accessed and analyzed on AWS.
Currently, there are 53 data sets in the registry. Each provides a tonne of data. Subjects vary from satellite imagery and weather monitoring to political and financial information.
Hopefully, this will grow and expand with time.
One of the greatest things about the Amazon AWS services is that they save a tonne of time on reinventing the wheel. There are numerous technologies out there, and nobody has the time to dive deep into, learn, and try all of them. Amazon AWS often provides ready-made templates and configurations for people who just want to try a technology or a tool without investing too much time (and money) into figuring out all the options and tweaks.
“Get Started with Blockchain Using the new AWS Blockchain Templates” is one example of such a predefined and pre-configured setup, for those who want to play around with Blockchain. Just think of how much time it would take somebody who simply wants to spin up their own Ethereum network, with some basic tools and services, just to check the technology out. With the predefined templates you can be up and running in minutes, and, once you are comfortable, you can spend more time rebuilding the whole thing, configuring and tweaking everything.
J Cole Morrison has a rather lengthy blog post on how to use CloudFoundation to simplify and automate the management of your Amazon AWS cloud infrastructure. AWS CloudFormation is a great tool, but it gets complex really fast with larger setups, so CloudFoundation comes to the rescue.
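For reference, the raw CloudFormation that tools like CloudFoundation help you generate and manage is just declarative YAML (or JSON). A trivial template – a single S3 bucket, with the bucket name chosen purely for illustration – looks like this:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal illustrative template - a single S3 bucket.
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket-name  # illustrative; bucket names must be globally unique
```

Multiply that by dozens of interdependent resources, parameters, and outputs, and you can see why a layer on top starts to pay off.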
“Immutable Deployment @ Quorum” describes yet another approach to automated – and in this case, immutable – deployments. This particular setup leans more toward the SysAdmin/DevOps side than the development side, utilizing tools like Ansible, Amazon EC2, and Amazon AMIs.
If you are building very few projects, or projects with little variation, and use a whole instance per project, then you should definitely check it out. For those who work with a zoo of technologies and share a server between several projects, this approach probably won’t work as well. It could be adjusted to use containers instead of instances, but even then it probably wouldn’t be optimal.
Gonzalo Ayuso shares a few snippets of code in the blog post titled “Handling Amazon SNS messages with PHP, Lumen and CloudWatch”, which shows how to work with Amazon SNS (Simple Notification Service) and Amazon CloudWatch (a cloud and network monitoring solution) from PHP. The examples are based on the Lumen micro-framework, which is basically a stripped-down Laravel.
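The post’s examples are in PHP; purely for illustration, here is the core idea sketched in Python. An SNS HTTP(S) endpoint has to distinguish the one-time SubscriptionConfirmation handshake from regular Notification messages (the function name is mine; the JSON field names are SNS’s own):

```python
import json


def handle_sns_payload(body: str) -> str:
    """Minimal dispatch for an Amazon SNS HTTP(S) endpoint payload.

    SNS POSTs a JSON document whose "Type" field distinguishes the
    one-time subscription handshake from regular notifications.
    """
    msg = json.loads(body)
    if msg["Type"] == "SubscriptionConfirmation":
        # A real endpoint would issue an HTTP GET to msg["SubscribeURL"]
        # to confirm the subscription.
        return f"confirm via {msg['SubscribeURL']}"
    if msg["Type"] == "Notification":
        # "Message" carries the published payload as a string.
        return f"got: {msg['Message']}"
    return "ignored"
```

In production you would also verify the message signature before acting on anything, which is one of the details the post walks through.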
“7 ways to do containers on AWS” covers a variety of different ways to run containers on the Amazon AWS cloud infrastructure. These include most of the usual suspects, like Amazon Elastic Container Service (ECS), Amazon Elastic Container Service for Kubernetes (EKS), and hand-rolled vanilla containers on EC2, as well as a few lesser-known ones like templated Kubernetes and AWS Fargate.
This must be one of the greatest presentations on Amazon AWS that I’ve ever seen. It takes a gradual approach – from small and simple to huge and complex. It covers a whole lot of different Amazon AWS services, how they complement each other, at which stage and scale each becomes useful, and more.
Even quickly jumping through the slides gave me a lot to think (and Google) about.