As someone who went through a whole pile of trial and error with Amazon AWS, I strongly recommend reading anything you can on the subject before you start moving your business to the cloud (not necessarily Amazon – any vendor), and while you have it running there. “The AWS spend of a SaaS side-business” is a good read in that category.
“5 Fancy Reasons and 7 Funky Uses for the AWS CLI” has a few good examples of AWS CLI usage:
- AWS CLI Multiple Profiles
- AWS CLI Autocomplete
- Formatting AWS CLI Output
- Filtering AWS CLI Output
- Using Waiters in the AWS CLI
- Using Input Files to Commands
- Using Roles to Access Resources
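A couple of these features can be sketched together. The profile names and regions below are made up, and the config is written to a local file so the sketch is self-contained; in practice this lives in `~/.aws/config`:

```shell
# Hypothetical multi-profile AWS CLI config (normally ~/.aws/config).
cat > aws-config-example <<'EOF'
[default]
region = us-east-1
output = json

[profile staging]
region = eu-west-1
output = table
EOF

# With profiles in place, the same command can target either account:
#   aws ec2 describe-instances --profile staging
# and --query filters the output with a JMESPath expression:
#   aws ec2 describe-instances \
#     --query 'Reservations[].Instances[].[InstanceId,State.Name]' \
#     --output table
echo "wrote aws-config-example"
```

The `aws` invocations are left in comments because they need real credentials; the point is the shape of `--profile` and `--query`.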
There are also a few useful links in the article, so make sure you at least scroll through it.
I think I’m giving up on even knowing the list and purpose of all the Amazon AWS services, let alone how to use them. Here’s one I hadn’t heard about until this very morning: AWS X-Ray.
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.
What is an AWS IAM Policy?
A set of rules that, under the correct conditions, define what actions the principal or holder can take to specified AWS resources.
That still sounds a bit stiff. How about:
Who can do what to which resources. When do we care?
There we go. Let’s break down the simple statement even more…
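The “who can do what to which resources, and when” framing maps directly onto a policy document. Here’s a minimal, hypothetical identity-based policy (the bucket name and IP range are placeholders): the identity the policy is attached to is the “who”, the action and resource are the “what” and “which”, and the condition is the “when do we care”:

```shell
# Write a minimal, hypothetical IAM policy to a local file.
cat > iam-policy-example.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-bucket",
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
EOF

# Sanity-check that the document is well-formed JSON before attaching it:
python3 -m json.tool iam-policy-example.json > /dev/null && echo "valid policy JSON"
```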
Compared to all the AWS documentation one has to dive through, this one is a giant time saver!
In “Why Configuration Management and Provisioning are Different” Carlos Nuñez advocates the use of specialized infrastructure provisioning tools, like Terraform, Heat, and CloudFormation, instead of relying on configuration management tools, like Ansible or Puppet.
I agree with his argument about rollbacks, but not so much with the points about maintaining state and complexity. However, I’m not yet comfortable enough to put my disagreement into words – my head is all over the place with clouds, and I’m still weak on the terminology.
The article is a nice read regardless, and it made me look at the provisioning tools once again.
- run frequent actions by using simple commands
- easily explore your infrastructure and cloud resource interrelations via CLI
- ensure smart defaults & security best practices
- manage resources through robust runnable & scriptable templates
- explore, analyse and query your infrastructure offline
- explore, analyse and query your infrastructure through time
I came across this handy Amazon AWS manual on maximum transmission unit (MTU) configuration for EC2 instances. This is not something one needs every day, but I’m sure that when I do need it, I’d otherwise spend hours trying to find it.
The maximum transmission unit (MTU) of a network connection is the size, in bytes, of the largest permissible packet that can be passed over the connection. The larger the MTU of a connection, the more data that can be passed in a single packet. Ethernet packets consist of the frame, or the actual data you are sending, and the network overhead information that surrounds it.
Ethernet frames can come in different formats, and the most common format is the standard Ethernet v2 frame format. It supports 1500 MTU, which is the largest Ethernet packet size supported over most of the Internet. The maximum supported MTU for an instance depends on its instance type. All Amazon EC2 instance types support 1500 MTU, and many current instance sizes support 9001 MTU, or jumbo frames.
The following instances support jumbo frames:
- Compute optimized: C3, C4, CC2
- General purpose: M3, M4, T2
- Accelerated computing: CG1, G2, P2
- Memory optimized: CR1, R3, R4, X1
- Storage optimized: D2, HI1, HS1, I2
As always, Julia Evans has got you covered on the basics of networking and the MTU.
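For the hands-on side, this is roughly how you’d inspect and change the MTU on a Linux instance. The interface name `eth0` is an assumption (it varies by instance type and distribution), and the commands need root and a live interface, so they’re shown in comments:

```shell
# Inspecting and changing the MTU on a Linux EC2 instance (sketch):
#   ip link show eth0                     # the current MTU is printed inline
#   sudo ip link set dev eth0 mtu 9001    # enable jumbo frames
#   tracepath amazon.com                  # discover the path MTU to a host
#
# Jumbo frames carry this many extra payload bytes per packet compared to
# the standard 1500-byte MTU:
echo $((9001 - 1500))
```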
Immutable infrastructure is a very powerful concept that brings stability, efficiency, and fidelity to your applications through automation and the use of successful patterns from programming. The general idea is that you never make changes to running infrastructure. Instead, you ensure that all infrastructure is created through automation, and to make a change, you simply create a new version of the infrastructure, and destroy the old one.
“Immutable Infrastructure with AWS and Ansible” is, so far, a three-part article series (part 1, part 2, part 3) that shows how to use Ansible to achieve immutable infrastructure on the Amazon Web Services cloud.
It covers everything from the basic setup of a workstation for executing Ansible playbooks all the way to AWS security (users, roles, security groups), deployment of resources, and auto-scaling.
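The immutable pattern itself can be sketched as a play that launches fresh instances from a newly baked image instead of mutating running ones. This is a hypothetical fragment, not the series’ own playbooks: the AMI id, security group, and instance type are all placeholders:

```shell
# Write a minimal, hypothetical Ansible play illustrating the immutable
# pattern: provision new instances from a fresh AMI; a later play (not
# shown) would retire the old ones rather than change them in place.
cat > immutable-deploy-example.yml <<'EOF'
- hosts: localhost
  connection: local
  tasks:
    - name: Launch new instances from the freshly baked AMI
      ec2:
        image: ami-00000000          # placeholder AMI id
        instance_type: t2.micro
        group: web-sg                # placeholder security group
        wait: yes
      register: new_instances
    # Follow-up tasks would swap the new instances into the load
    # balancer and terminate the old ones.
EOF
echo "wrote immutable-deploy-example.yml"
```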
Yesterday I helped a friend figure out why he couldn’t connect to his Amazon RDS database inside an Amazon VPC (Virtual Private Cloud). It was the second time someone had asked me to help with Amazon Web Services (AWS), and the first time I was actually helpful. Yay!
While I do use quite a few of the Amazon Web Services, I don’t have any experience with Amazon RDS yet, as I manage my own MySQL instances. It was interesting to get my toes wet with the troubleshooting.
Here are a few things I’ve learned in the process.
Lesson #1: Amazon supports two different ways of accessing the RDS service. Make sure you know which one you are using and act accordingly.
If you run an Amazon RDS instance in a VPC, you’ll have to set up your networking and security access properly. This page – Connecting to a DB Instance Running the MySQL Database Engine – will only be useful once everything else is taken care of. It shouldn’t be the first and only manual you visit.
Lesson #2 (sort of obvious): Make sure that both your Network ACL and Security Groups allow all the necessary traffic in and out. Double-check the IP addresses in the rules. Make sure you are not using a proxy server when looking up your external IP address on WhatIsMyIP.com or similar.
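One way to double-check what a security group really allows is to dump its rules with the AWS CLI. The group id below is a placeholder and the call needs configured credentials, so it is shown in a comment; the JMESPath query is what does the work:

```shell
# JMESPath query that flattens a security group's inbound rules into
# protocol / port range / CIDR columns:
sg_query='SecurityGroups[].IpPermissions[].[IpProtocol,FromPort,ToPort,IpRanges[].CidrIp]'

# With credentials configured (sg-00000000 is a placeholder id):
#   aws ec2 describe-security-groups --group-ids sg-00000000 \
#     --query "$sg_query" --output table
#
# And the external IP your traffic actually leaves with, bypassing any
# browser-level proxy:
#   curl -s https://checkip.amazonaws.com
echo "$sg_query"
```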
Lesson #3: Do not use ICMP traffic (ping and such) as a troubleshooting tool. It looks like Amazon RDS won’t be pingable even if you allow it in your firewalls. Try “telnet your-rds-end-point-server your-rds-end-point-port” (example: “telnet 18.104.22.168 3306”) or a real database client, like the command-line MySQL one.
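If telnet isn’t installed, bash’s `/dev/tcp` pseudo-device does the same TCP-level probe, which works even where ping is blocked. The hostname below is a stand-in for your real RDS endpoint:

```shell
# Probe a TCP port without ICMP. Substitute your actual RDS endpoint for
# the placeholder host; 3306 is the default MySQL port.
host=localhost   # e.g. mydb.xxxxxxxx.us-east-1.rds.amazonaws.com
port=3306

if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  echo "port $port on $host is reachable"
else
  echo "port $port on $host is NOT reachable"
fi
```

A successful connection proves routing, NACLs, and security groups are all fine, which narrows any remaining problem down to credentials or the database itself.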
Lesson #4: Make sure your routing is set up properly. Check that the subnet in which your RDS instance resides has the correct routing table attached to it, and that the routing table has a default gateway (0.0.0.0/0) route configured to either the Internet Gateway or some sort of NAT. Chances are your subnet only deals with the private IP range and has no way of sending traffic outside.
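The routes for a given subnet can be checked from the CLI as well. The subnet id is a placeholder and the call needs credentials, hence the comments:

```shell
# Which route table serves a subnet, and where does 0.0.0.0/0 point?
# (subnet-00000000 is a placeholder id):
#   aws ec2 describe-route-tables \
#     --filters Name=association.subnet-id,Values=subnet-00000000 \
#     --query 'RouteTables[].Routes[].[DestinationCidrBlock,GatewayId]' \
#     --output table
#
# Reading the result: 0.0.0.0/0 -> igw-...  means the subnet is public;
# 0.0.0.0/0 -> nat-... means outbound-only via NAT; and no 0.0.0.0/0 row
# at all means traffic cannot leave the VPC.
default_route="0.0.0.0/0"
echo "look for a $default_route route in the subnet's route table"
```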
Lesson #5: When confused, disoriented, and stuck, assume it’s not Amazon’s fault. Keep calm and troubleshoot like any other remote connection issue. Double-check your assumptions.
There’s probably a lesson #6 somewhere in there, about contacting support or something along those lines. But in this particular case it didn’t come to that. Amazon AWS support is excellent, though. I’ve had to deal with those guys twice in the last two-something years, and they were awesome.