Subbu Allamaraju says “Don’t Build Private Clouds”. I agree with his rationale.
There are very few enterprises on the planet right now that need to own, operate, and automate data centers. Unless you have at least 200,000 servers in multiple locations, or you’re in a specific technology industry like communications, networking, media delivery, or power, you shouldn’t be in the data center and private cloud business. If you’re below this threshold, you should be spending most of your time and effort on getting out of the data center, not on automating and improving your on-premise data center footprint.
His main three points are:
- Private cloud makes you procrastinate doing the right things.
- Private cloud cost models are misleading.
- Don’t underestimate on-premise data center influence on your organization’s culture.
This article – “Using Ansible to Bootstrap My Work Environment Part 4” – is pure gold for anyone trying to figure out all the moving parts needed to automate the provisioning and configuration of an Amazon EC2 instance with Ansible.
Sure, some bits are easier than others, but it takes time to go from one step to the next. This article has everything you need, including the provisioning Ansible playbook and variables, cloud-init bits, and more.
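For a taste of what such a playbook looks like, here’s a minimal sketch of launching an EC2 instance with Ansible’s `ec2` module and feeding it cloud-init user data. The AMI ID, key pair, and security group names are placeholders, not values from the article:

```yaml
# Sketch only: provision an EC2 instance from the Ansible control machine.
# ami-xxxxxxxx, my-key, and my-sg are placeholders you would replace.
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    instance_type: t2.micro
    ami_id: ami-xxxxxxxx
  tasks:
    - name: Launch the instance
      ec2:
        key_name: my-key
        group: my-sg
        instance_type: "{{ instance_type }}"
        image: "{{ ami_id }}"
        wait: yes
        user_data: "{{ lookup('file', 'cloud-init.yml') }}"
      register: ec2_result

    - name: Add the new instance to the in-memory inventory
      add_host:
        hostname: "{{ item.public_ip }}"
        groups: launched
      with_items: "{{ ec2_result.instances }}"
```

Once the host lands in the `launched` group, a second play can run the configuration roles against it.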
I’ve printed and laminated my copy. It’s on the wall now. It will provide me with countless hours of joy during the upcoming Christmas season.
Cloud Academy Blog goes over the top 13 Amazon VPC best practices – particularly good for those just starting out with the platform. The article discusses the following:
- Choosing the Proper VPC Configuration for Your Organization’s Needs
- Choosing a CIDR Block for Your VPC Implementation
- Isolating Your VPC Environments
- Securing Your Amazon VPC Implementation
- Creating Your Disaster Recovery Plan
- Traffic Control and Security
- Keep your Data Close
- VPC Peering
- EIP – Just In Case
- NAT Instances
- Determining the NAT Instance Type
- IAM for Your Amazon VPC Infrastructure
- ELB on Amazon VPC
Overall, it’s a very handy quick list.
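The CIDR-block item is worth a concrete illustration. Python’s standard `ipaddress` module makes it easy to see how a VPC’s address block carves up into subnets (the 10.0.0.0/16 block and the public/private split are just examples, not AWS recommendations):

```python
import ipaddress

# Carve an example /16 VPC CIDR block into /24 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))      # 256 possible /24 subnets in a /16
print(subnets[0])        # 10.0.0.0/24 -- e.g. a public subnet
print(subnets[1])        # 10.0.1.0/24 -- e.g. a private subnet

# AWS reserves 5 addresses in every subnet, so a /24 leaves 251 usable.
print(subnets[0].num_addresses - 5)   # 251
```

Picking a block that's large enough to subnet per availability zone, yet doesn't overlap your on-premise ranges, is the decision the article walks through.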
Serverlessconf 2016 – New York City: a personal report – is a fascinating read. Let me get you hooked:
This event left me with the impression (or the confirmation) that there are two paces and speeds at which people are moving.
There is the so-called “legacy” pace. This is often characterized by the notion of VMs and virtualization. This market is typically on-prem, owned by VMware, and where the majority of workloads (as of today) are running. Very steady.
The second “industry block” is the “new stuff” and this is a truly moving target. #Serverless is yet another model that we are seeing emerging in the last few years. We have moved from Cloud (i.e. IaaS) to opinionated PaaS, to un-opinionated PaaS, to DIY Containers, to CaaS (Containers as a Service) to now #Serverless. There is no way this is going to be the end of it as it’s a frenetic moving target and in every iteration more and more people will be left behind.
This time around was all about the DevOps people being “industry dinosaurs”. So if you are a DevOps persona, know you are legacy already.
Sometimes I feel like I am living on a different planet. All these people are so close, yet so far away …
So, it looks like I’m not the only one trying to figure out Amazon EC2 virtual CPU allocation. Slashdot runs the story (and a heated debate, as usual) on the subject of Amazon’s non-definitive virtual CPUs:
ECUs were not the simplest approach to describing a virtual CPU, but they at least had a definition attached to them. Operations managers and those responsible for calculating server pricing could use that measure for comparison shopping. But ECUs were dropped as a visible and useful definition without announcement two years ago in favor of a descriptor — virtual CPU — that means, mainly, whatever AWS wants it to mean within a given instance family.
A precise number of ECUs in an instance has become simply a “virtual CPU.”
Yesterday I wrote a blog post trying to figure out what CPU steal time is and why it occurs. The problem with that post was that I didn’t go deep enough.
I was looking at the issue from the point of view of a generic virtual machine, but the case I had to deal with wasn’t exactly like that: I saw the CPU steal time on an Amazon EC2 instance. Assuming that these were just my neighbors acting up, or Amazon having a temporary hardware issue, was the wrong conclusion.
That’s because I didn’t know enough about Amazon EC2. Well, I’ve learned a bunch since then, so here’s what I found.
Continue reading “CPU Steal Time. Now on Amazon EC2”
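For reference, steal time is one of the per-CPU counters the kernel exposes in `/proc/stat` — the same numbers `top` and `vmstat` read. Here’s a minimal sketch of turning a `cpu` line into a steal percentage (the sample counter values are made up for illustration):

```python
# Compute the steal-time percentage from a /proc/stat "cpu" line.
def steal_percent(stat_line):
    fields = stat_line.split()
    # fields[0] is the "cpu" label; the counters that follow are:
    # user nice system idle iowait irq softirq steal guest guest_nice
    counters = [int(v) for v in fields[1:]]
    steal = counters[7]
    total = sum(counters[:8])  # all time buckets through steal
    return 100.0 * steal / total

sample = "cpu  4705 150 1120 16250 520 30 45 180 0 0"
print(round(steal_percent(sample), 2))  # 0.78
```

On a real system you would sample the line twice and diff the counters, since they are cumulative ticks since boot.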
The rumor that Microsoft is working on its own Linux distribution has been going around for a while. Now it’s confirmed by Microsoft itself:
The Azure Cloud Switch (ACS) is our foray into building our own software for running network devices like switches. It is a cross-platform modular operating system for data center networking built on Linux. ACS allows us to debug, fix, and test software bugs much faster. It also allows us the flexibility to scale down the software and develop features that are required for our datacenter and our networking needs.
The distribution is not for sale or download, but purely for use in their Azure cloud infrastructure. The Register looks at this in detail.
I guess Mahatma Gandhi was right:
First they ignore you, then they laugh at you, then they fight you, then you win.
Here’s a little insight into Amazon’s cloud computing infrastructure:
Amazon operates at least 30 data centers in its global network, with another 10 to 15 on the drawing board.
How big is a data center?
A key decision in planning and deploying cloud capacity is how large a data center to build. Amazon’s huge scale offers advantages in both cost and operations. Hamilton said most Amazon data centers house between 50,000 and 80,000 servers, with a power capacity of between 25 and 30 megawatts.
So, how many servers does AWS run?
So how many servers does Amazon Web Services run? The descriptions by Hamilton and Vogels suggest the number is at least 1.5 million. Figuring out the upper end of the range is more difficult, but could range as high as 5.6 million, according to calculations by Timothy Prickett Morgan at The Platform.
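The 1.5 million lower bound falls straight out of the quoted figures; here’s the back-of-the-envelope arithmetic (which low/high figures to pair up is my assumption, not Hamilton’s):

```python
# Figures quoted above: "at least 30 data centers",
# "between 50,000 and 80,000 servers", "between 25 and 30 megawatts".
data_centers = 30
servers_low, servers_high = 50_000, 80_000
power_mw_low, power_mw_high = 25, 30

# Lower bound on the fleet: 30 data centers x 50,000 servers each.
print(data_centers * servers_low)                 # 1,500,000

# Implied power budget per server, including cooling and distribution overhead.
print(power_mw_low * 1_000_000 / servers_high)    # 312.5 W at the dense end
print(power_mw_high * 1_000_000 / servers_low)    # 600.0 W at the sparse end
```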
Twilio – APIs for Text Messaging, VoIP & Voice in the Cloud.