- run frequent actions by using simple commands
- easily explore your infrastructure and the interrelations of your cloud resources via the CLI
- ensure smart defaults & security best practices
- manage resources through robust runnable & scriptable templates (see
- explore, analyse and query your infrastructure offline
- explore, analyse and query your infrastructure through time
I came across this handy Amazon AWS manual for the maximum transmission unit (MTU) configuration for EC2 instances. This is not something one needs every day, but I’m sure that when I do need it, I’d otherwise spend hours trying to find it.
The maximum transmission unit (MTU) of a network connection is the size, in bytes, of the largest permissible packet that can be passed over the connection. The larger the MTU of a connection, the more data that can be passed in a single packet. Ethernet packets consist of the frame, or the actual data you are sending, and the network overhead information that surrounds it.
Ethernet frames can come in different formats, and the most common format is the standard Ethernet v2 frame format. It supports 1500 MTU, which is the largest Ethernet packet size supported over most of the Internet. The maximum supported MTU for an instance depends on its instance type. All Amazon EC2 instance types support 1500 MTU, and many current instance sizes support 9001 MTU, or jumbo frames.
The following instances support jumbo frames:
- Compute optimized: C3, C4, CC2
- General purpose: M3, M4, T2
- Accelerated computing: CG1, G2, P2
- Memory optimized: CR1, R3, R4, X1
- Storage optimized: D2, HI1, HS1, I2
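To get a feel for what jumbo frames buy you, here is a minimal back-of-the-envelope sketch (my own illustration, not from the AWS manual) comparing per-packet TCP payload and packet counts at 1500 vs 9001 MTU, assuming plain IPv4 and TCP headers of 20 bytes each with no options:

```python
def tcp_payload_per_packet(mtu: int) -> int:
    """Usable TCP payload per packet: MTU minus IPv4 (20 B) and TCP (20 B) headers."""
    IP_HEADER = 20
    TCP_HEADER = 20
    return mtu - IP_HEADER - TCP_HEADER

def packets_for_transfer(total_bytes: int, mtu: int) -> int:
    """Packets needed to move total_bytes at a given MTU (ceiling division)."""
    payload = tcp_payload_per_packet(mtu)
    return -(-total_bytes // payload)

if __name__ == "__main__":
    one_gib = 1024 ** 3
    for mtu in (1500, 9001):
        print(f"MTU {mtu}: {tcp_payload_per_packet(mtu)} B payload, "
              f"{packets_for_transfer(one_gib, mtu):,} packets per GiB")
```

Roughly six times fewer packets per GiB at 9001 MTU, which is where the throughput and CPU savings of jumbo frames come from.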
As always, Julia Evans has got you covered on the basics of networking and the MTU.
As I am reading this story – GitLab.com melts down after wrong directory deleted, backups fail – and these details, every single hair I have stands on end … I don’t (and didn’t) have any data on GitLab, so I haven’t lost anything. But as somebody who worked as a system administrator (and backup administrator) for years, I can imagine the physical and psychological state of the team all too well.
Sure, things could have been done better. But that’s easier said than done. Modern technology is very complex. And it changes fast. And businesses want to move fast too. And the proper resources (time, money, people) are not always allocated for mission-critical tasks. One thing is for sure: the responsibility lies with a whole bunch of people for a whole bunch of decisions. But the hardest job right now falls on the tech people, to bring back whatever they can. There’s no sleep. Probably no food. No fun. And tremendous pressure all around.
I wish the guys and gals at GitLab the best of luck. Hopefully they will find a snapshot to restore from, and this whole thing will calm down and sort itself out. Stay strong!
And I guess I’ll be doing test restores all night today, making sure that all my things are covered…
Update: you can now read the full post-mortem as well.
Subbu Allamaraju says “Don’t Build Private Clouds“. I agree with his rationale.
There are very few enterprises on the planet right now that need to own, operate and automate data centers. Unless you have at least 200,000 servers in multiple locations, or you’re in specific technology industries like communications, networking, media delivery, power, etc., you shouldn’t be in the data center and private cloud business. If you’re below this threshold, you should be spending most of your time and effort on getting out of the data center, and not on automating and improving your on-premise data center footprint.
His main three points are:
- Private cloud makes you procrastinate doing the right things.
- Private cloud cost models are misleading.
- Don’t underestimate on-premise data center influence on your organization’s culture.
This article – “Using Ansible to Bootstrap My Work Environment Part 4” is pure gold for anyone trying to figure out all the moving parts needed to automate the provisioning and configuration of the Amazon EC2 instance with Ansible.
Sure, some bits are easier than others, but it takes time to get from one step to the next. In this article, you have everything you need, including the provisioning Ansible playbook and variables, cloud-init bits, and more.
I’ve printed and laminated my copy. It’s on the wall now. It will provide me with countless hours of joy during the upcoming Christmas season.
- Choosing the Proper VPC Configuration for Your Organization’s Needs
- Choosing a CIDR Block for Your VPC Implementation
- Isolating Your VPC Environments
- Securing Your Amazon VPC Implementation
- Creating Your Disaster Recovery Plan
- Traffic Control and Security
- Keep your Data Close
- VPC Peering
- EIP – Just In Case
- NAT Instances
- Determining the NAT Instance Type
- IAM for Your Amazon VPC Infrastructure
- ELB on Amazon VPC
Overall, it’s a very handy quick list.
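On the CIDR block point in particular, Python’s standard `ipaddress` module is handy for sanity-checking a VPC design before you commit to it. A small sketch (the `10.0.0.0/16` block and `/24` subnet size are hypothetical, picked only for illustration; the 5 reserved addresses per subnet is AWS’s documented behavior):

```python
import ipaddress

# Hypothetical VPC CIDR; pick yours to avoid overlap with peered VPCs and on-prem ranges.
vpc = ipaddress.ip_network("10.0.0.0/16")
print(f"VPC {vpc} offers {vpc.num_addresses} addresses")

# Carve the VPC into /24 subnets, e.g. one per tier per availability zone.
subnets = list(vpc.subnets(new_prefix=24))
print(f"{len(subnets)} /24 subnets; first: {subnets[0]}")

# AWS reserves 5 addresses in every subnet (network, VPC router, DNS,
# future use, and broadcast), so a /24 yields 256 - 5 usable hosts.
usable = subnets[0].num_addresses - 5
print(f"Usable hosts per /24 subnet on AWS: {usable}")
```

Doing this arithmetic up front is much cheaper than discovering mid-migration that your VPC overlaps a range you need to peer with.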
Serverlessconf 2016 – New York City: a personal report – is a fascinating read. Let me get you hooked:
This event left me with the impression (or the confirmation) that there are two paces and speeds at which people are moving.
There is the so-called “legacy” pace. This is often characterized by the notion of VMs and virtualization. This market is typically on-prem, owned by VMware, and where the majority of workloads (as of today) are running. Very steady.
The second “industry block” is the “new stuff” and this is a truly moving target. #Serverless is yet another model that we are seeing emerging in the last few years. We have moved from Cloud (i.e. IaaS) to opinionated PaaS, to un-opinionated PaaS, to DIY Containers, to CaaS (Containers as a Service) to now #Serverless. There is no way this is going to be the end of it as it’s a frenetic moving target and in every iteration more and more people will be left behind.
This time around was all about the DevOps people being “industry dinosaurs”. So if you are a DevOps persona, know you are legacy already.
Sometimes I feel like I am living on a different planet. All these people are so close, yet so far away …
So, it looks like I’m not the only one trying to figure out Amazon EC2 virtual CPU allocation. Slashdot runs the story (and a heated debate, as usual) on the subject of Amazon’s non-definitive virtual CPUs:
ECUs were not the simplest approach to describing a virtual CPU, but they at least had a definition attached to them. Operations managers and those responsible for calculating server pricing could use that measure for comparison shopping. But ECUs were dropped as a visible and useful definition without announcement two years ago in favor of a descriptor — virtual CPU — that means, mainly, whatever AWS wants it to mean within a given instance family.
A precise number of ECUs in an instance has become simply a “virtual CPU.”
Yesterday I wrote a blog post trying to figure out what CPU steal time is and why it occurs. The problem with that post was that I didn’t go deep enough.
I was looking at the issue from the point of view of a generic virtual machine. The case I had to deal with wasn’t exactly like that: I saw the CPU steal time on an Amazon EC2 instance. Assuming that these were just my neighbors acting up, or Amazon having a temporary hardware issue, was the wrong conclusion.
That’s because I didn’t know enough about Amazon EC2. Well, I’ve learned a bunch since then, so here’s what I found.
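For readers who want to measure steal time themselves rather than eyeball `top`, it lives in the 8th counter of the `cpu` lines in `/proc/stat` on Linux. A small sketch (the sample line is made up for illustration; on a real box you would read the first line of `/proc/stat`):

```python
def steal_ticks(proc_stat_cpu_line: str) -> int:
    """Extract the 'steal' counter (field 8 after the 'cpu' label) from a /proc/stat line.

    Fields after the label: user nice system idle iowait irq softirq steal guest guest_nice.
    Steal is time the hypervisor ran other tenants while this vCPU wanted to run.
    """
    fields = proc_stat_cpu_line.split()
    return int(fields[8])

def steal_percent(proc_stat_cpu_line: str) -> float:
    """Steal time as a percentage of all CPU ticks accounted so far."""
    counters = [int(f) for f in proc_stat_cpu_line.split()[1:]]
    return 100.0 * counters[7] / sum(counters)

if __name__ == "__main__":
    # Hypothetical sample; on Linux: open("/proc/stat").readline()
    sample = "cpu  10132153 290696 3084719 46828483 16683 0 25195 175628 0 0"
    print(f"steal ticks: {steal_ticks(sample)}")
    print(f"steal share: {steal_percent(sample):.2f}%")
```

Note these counters are cumulative since boot, so for a live steal rate you would sample twice and diff, which is what `top` and `vmstat` do under the hood.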
Criticism-as-a-service: for a fixed fee our team of self-proclaimed experts will criticise your idea/startup/blog post.
— Marc Gear (@marcgear) November 23, 2015