Cloud Academy Blog goes over the top 13 Amazon VPC best practices – particularly good for those just starting out with the platform. The article discusses the following:
- Choosing the Proper VPC Configuration for Your Organization’s Needs
- Choosing a CIDR Block for Your VPC Implementation
- Isolating Your VPC Environments
- Securing Your Amazon VPC Implementation
- Creating Your Disaster Recovery Plan
- Traffic Control and Security
- Keep your Data Close
- VPC Peering
- EIP – Just In Case
- NAT Instances
- Determining the NAT Instance Type
- IAM for Your Amazon VPC Infrastructure
- ELB on Amazon VPC
Overall, it’s a very handy quick list.
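To make the CIDR and isolation points a bit more concrete, here is a minimal boto3 sketch (my own illustration, not from the article – the region, names, and address ranges are made up):

```python
# Minimal sketch (not from the article): carving a VPC out of a private CIDR
# block with boto3. Region, tags, and ranges below are arbitrary examples.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Pick a CIDR block large enough to grow into, but one that will not overlap
# with other VPCs or on-prem networks you may later peer or VPN with.
vpc = ec2.create_vpc(CidrBlock="10.10.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "demo-vpc"}])

# Isolate environments by subnet: one per AZ/tier keeps routing and NACLs simple.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.10.1.0/24", AvailabilityZone="us-east-1a")
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.10.2.0/24", AvailabilityZone="us-east-1b")
```

The main design choice here is the CIDR block itself: overlapping ranges rule out clean VPC peering later, and changing a VPC’s primary addressing after instances are running is painful.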
Serverlessconf 2016 – New York City: a personal report – is a fascinating read. Let me get you hooked:
This event left me with the impression (or the confirmation) that there are two speeds at which people are moving.
There is the so-called “legacy” pace. This is often characterized by the notion of VMs and virtualization. This market is typically on-prem, largely owned by VMware, and it is where the majority of workloads (as of today) are running. Very steady.
The second “industry block” is the “new stuff” and this is a truly moving target. #Serverless is yet another model that we are seeing emerging in the last few years. We have moved from Cloud (i.e. IaaS) to opinionated PaaS, to un-opinionated PaaS, to DIY Containers, to CaaS (Containers as a Service) to now #Serverless. There is no way this is going to be the end of it as it’s a frenetic moving target and in every iteration more and more people will be left behind.
This time around it was all about the DevOps people being “industry dinosaurs”. So if you are a DevOps persona, know that you are legacy already.
Sometimes I feel like I am living on a different planet. All these people are so close, yet so far away…
So, it looks like I’m not the only one trying to figure out Amazon EC2 virtual CPU allocation. Slashdot runs the story (and a heated debate, as usual) on the subject of Amazon’s non-definitive virtual CPUs:
ECUs were not the simplest approach to describing a virtual CPU, but they at least had a definition attached to them. Operations managers and those responsible for calculating server pricing could use that measure for comparison shopping. But ECUs were dropped as a visible and useful definition without announcement two years ago in favor of a descriptor — virtual CPU — that means, mainly, whatever AWS wants it to mean within a given instance family.
A precise number of ECUs in an instance has become simply a “virtual CPU.”
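These days you can at least ask the API how the vCPUs of a given instance type map onto cores and hyper-threads – though, as the quote points out, that still tells you nothing about the underlying hardware generation or clock speed. A quick boto3 sketch of mine, with arbitrary instance types:

```python
# Quick sketch: asking the EC2 API what a "vCPU" means for a few instance
# types. This only reveals the vCPU/core/thread counts, not the silicon behind them.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instance_types(InstanceTypes=["m4.large", "c4.xlarge", "t2.medium"])

for it in resp["InstanceTypes"]:
    v = it["VCpuInfo"]
    print(it["InstanceType"], v["DefaultVCpus"], "vCPUs =",
          v["DefaultCores"], "cores x", v["DefaultThreadsPerCore"], "threads")
```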
Yesterday I wrote a blog post trying to figure out what CPU steal time is and why it occurs. The problem with that post was that I didn’t go deep enough.
I was looking at the issue from the point of view of a generic virtual machine. The case I had to deal with wasn’t exactly like that: I saw the CPU steal time on an Amazon EC2 instance. Assuming that this was just my neighbors acting up, or Amazon having a temporary hardware issue, was the wrong conclusion.
That’s because I didn’t know enough about Amazon EC2. Well, I’ve learned a bunch since then, so here’s what I found.
Continue reading “CPU Steal Time. Now on Amazon EC2”
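If you want to watch steal time yourself while reading, the `st` figure that `top` reports comes from the steal counter in /proc/stat. Here is a quick sketch of mine (not from the post) that samples it on a Linux guest:

```python
# Minimal sketch: measuring CPU steal by sampling /proc/stat twice.
# The aggregate "cpu" line lists: user nice system idle iowait irq softirq steal ...
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()       # first line is the "cpu" aggregate
    return [int(x) for x in fields[1:]]     # drop the "cpu" label

def steal_percent(interval=1.0):
    a = cpu_times()
    time.sleep(interval)
    b = cpu_times()
    delta = [y - x for x, y in zip(a, b)]
    total = sum(delta) or 1
    return 100.0 * delta[7] / total         # 8th counter (0-based index 7) is steal

if __name__ == "__main__":
    print(f"steal: {steal_percent():.1f}%") # roughly what top shows as %st
```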
The rumor of Microsoft working on its own Linux distribution has been going around for a while. Now it’s confirmed by Microsoft themselves:
The Azure Cloud Switch (ACS) is our foray into building our own software for running network devices like switches. It is a cross-platform modular operating system for data center networking built on Linux. ACS allows us to debug, fix, and test software bugs much faster. It also allows us the flexibility to scale down the software and develop features that are required for our datacenter and our networking needs.
The distribution is not for sale or download, but purely for use in their Azure cloud infrastructure. The Register looks at this in detail.
I guess Mahatma Gandhi was right:
First they ignore you, then they laugh at you, then they fight you, then you win.
Here’s a little insight into Amazon’s cloud computing infrastructure:
Amazon operates at least 30 data centers in its global network, with another 10 to 15 on the drawing board.
How big is a data center?
A key decision in planning and deploying cloud capacity is how large a data center to build. Amazon’s huge scale offers advantages in both cost and operations. Hamilton said most Amazon data centers house between 50,000 and 80,000 servers, with a power capacity of between 25 and 30 megawatts.
So, how many servers does AWS run?
So how many servers does Amazon Web Services run? The descriptions by Hamilton and Vogels suggest the number is at least 1.5 million. Figuring out the upper end of the range is more difficult, but could range as high as 5.6 million, according to calculations by Timothy Prickett Morgan at The Platform.
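The lower bound is just the arithmetic implied by the two quotes above:

```python
# Back-of-the-envelope lower bound behind the "at least 1.5 million" figure:
# data center count times the low end of Hamilton's servers-per-site range.
# (The 5.6 million upper bound comes from Morgan's own, more involved, estimate.)
data_centers = 30          # "at least 30 data centers in its global network"
servers_per_dc = 50_000    # low end of the 50,000-80,000 range
print(data_centers * servers_per_dc)   # -> 1500000
```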
Twilio – APIs for Text Messaging, VoIP & Voice in the Cloud.
On the way to work today I enjoyed an excellent episode of Software Engineering Radio, which featured an interview with Eric Brewer, VP of Infrastructure at Google, who is probably best known for his CAP theorem.
In theoretical computer science, the CAP theorem, also known as Brewer’s theorem, states that it is impossible for a distributed computer system to simultaneously provide all three of the following guarantees:
- Consistency (all nodes see the same data at the same time)
- Availability (a guarantee that every request receives a response about whether it succeeded or failed)
- Partition tolerance (the system continues to operate despite arbitrary message loss or failure of part of the system)
The discussion around “2 out of 3” was very thought-provoking and, at first, a little bit counter-intuitive. If you don’t want to listen to the show, read through this page, which covers the important bits.
The easiest way to understand CAP is to think of two nodes on opposite sides of a partition. Allowing at least one node to update state will cause the nodes to become inconsistent, thus forfeiting C. Likewise, if the choice is to preserve consistency, one side of the partition must act as if it is unavailable, thus forfeiting A. Only when nodes communicate is it possible to preserve both consistency and availability, thereby forfeiting P. The general belief is that for wide-area systems, designers cannot forfeit P and therefore have a difficult choice between C and A. In some sense, the NoSQL movement is about creating choices that focus on availability first and consistency second; databases that adhere to ACID properties (atomicity, consistency, isolation, and durability) do the opposite.
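To make the “two nodes on opposite sides of a partition” example concrete, here is a toy sketch of mine (not from the article) showing the two choices a replica can make once the partition happens:

```python
# Toy sketch of the "two nodes on opposite sides of a partition" example.
# A replica either refuses writes during a partition (CP: stays consistent,
# gives up availability) or accepts them (AP: stays available, may diverge).
class Replica:
    def __init__(self, name, prefer_consistency):
        self.name = name
        self.prefer_consistency = prefer_consistency
        self.value = None
        self.partitioned = False

    def write(self, value):
        if self.partitioned and self.prefer_consistency:
            raise RuntimeError(f"{self.name}: unavailable during partition (CP choice)")
        self.value = value          # AP choice: accept the write, risk divergence
        return "ok"

a = Replica("A", prefer_consistency=True)
b = Replica("B", prefer_consistency=False)
a.partitioned = b.partitioned = True

print(b.write("x"))                 # "ok" -> B stays available but now disagrees with A
try:
    a.write("y")
except RuntimeError as e:
    print(e)                        # A forfeits availability to preserve consistency
```

Replica B keeps answering but now disagrees with A; replica A stays consistent by refusing to answer. Pick your poison.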
This puts some of the current trends into perspective.
SingleHop – a cloud-based hosting company – created this infographic on the cost of loss when your backups aren’t up to par. It should work well as a reminder, especially if printed out and hung on the wall in front of a sysadmin (but also somewhere management can occasionally see it too).