As someone who went through a whole pile of trial and error with Amazon AWS, I strongly recommend reading anything you can on the subject before you start moving your business to the cloud (not even necessarily Amazon, but any vendor), and while you have it running there. “The AWS spend of a SaaS side-business” is a good one in that category.
5 Fancy Reasons and 7 Funky Uses for the AWS CLI has a few good examples of AWS CLI usage:
- AWS CLI Multiple Profiles
- AWS CLI Autocomplete
- Formatting AWS CLI Output
- Filtering AWS CLI Output
- Using Waiters in the AWS CLI
- Using Input Files to Commands
- Using Roles to Access Resources
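To give a flavour of what the article walks through, here are a few of these features in plain AWS CLI form (a sketch, not runnable without credentials; the profile name and instance ID are made up for illustration):

```shell
# Use a named profile (set up earlier via `aws configure --profile staging`)
aws s3 ls --profile staging

# Format output as a human-readable table instead of the default JSON
aws ec2 describe-instances --output table

# Filter output: --filters narrows the API response server-side,
# --query (a JMESPath expression) shapes it client-side
aws ec2 describe-instances \
    --filters "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].InstanceId"

# Use a waiter to block a script until an instance is actually up
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0
```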
There are also a few useful links in the article, so make sure you at least scroll through it.
I think I’m giving up on even knowing the list and purpose of all the Amazon AWS services, let alone how to use them. Here’s one I hadn’t heard about until this very morning: AWS X-Ray.
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.
“What Comes After SaaS?” is a collection of some interesting thoughts on the evolution of the software industry, its current position and issues, and what’s coming next.
Here are a few bits to get you started:
[T]he easy availability and mass adoption of cloud-based (SaaS) technology makes advanced software systems so much easier/cheaper/faster to build that “value” is rapidly bleeding out of the software stack. Yes, software is eating the world, but software’s very ubiquity is starting to threaten the ability to extract value from software. In other words, the ability to write and deploy code is no longer a core value driver.
In an era of cloud and open source, deep technology attacking hard problems is becoming a shallower moat. The use of open source is making it harder to monetize technology advances while the use of cloud to deliver technology is moving defensibility to different parts of the product. Companies that focus too much on technology without putting it in context of a customer problem will be caught between a rock and a hard place — or as I like to say, “between open source and a cloud place.”
And here’s the best part, talking about Cloud 3.0:
The next chapter of Cloud software will lead to an explosion of new vendors and offerings. But they won’t quite look the same as before — expect lots of point solutions (run by small teams or even individuals) and software as a delivery for more elaborate (e.g. human-in-the-loop) service.
This new way of doing business is still developing rapidly. But here’s a spotting guide to identify this new breed of company in the wild:
Cloud 3.0 Company Differentiators:
- Connect from anywhere: one click auth, integrations with all major platforms with relevant data sources to power the tool
- Open platform: complete developer APIs and export functionality — and maybe even storing core data in one or more other vendors’ systems
- Programmatic use: many happy customers may only ever interact programmatically — no more interfaces, dashboards or logins to remember. Just value and connectivity.
- Clear core value: most companies seem to fit in one or more of the categories below:
One or More Core Values
- I: Best-in-Class Point Solution (e.g. Lead Scoring)
- II: Connectivity Platform — the integrations are the product (e.g. Segment, mParticle, Zapier )
- III: Solution Ecosystem — the core value of the product might actually be other developers who happen to deploy or deliver their value through this product’s pipes.
Interestingly, Salesforce — who brought us “The Cloud” — may even be the first major window to what next generation companies look like. After all, one could argue the value to an SMB choosing Salesforce (instead of the many ways to manage sales contacts) has become:
- A standardized schema for CRM data
- Easy integrations with hundreds of other point solutions
- A pool of independent contractors with familiarity of the problem space
We could imagine even faster innovation if only there were a way to establish trust with many remote vendors and workers, each offering the very best point solution in the world. 10 Million “companies” powered by the very best person in the world at their solution. Sounds a little bit like Ethereum and the token-based economy…
What is an AWS IAM Policy?
A set of rules that, under the correct conditions, define what actions a principal or holder can take to specified AWS resources.
That still sounds a bit stiff. How about:
Who can do what to which resources. When do we care?
There we go. Let’s break down the simple statement even more…
Compared to all the AWS documentation one has to dive through, this one is a giant time saver!
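The “who can do what to which resources, and when” breakdown maps directly onto the fields of a policy document: Action is the “what”, Resource is the “which”, Condition is the “when”, and the “who” is usually the IAM user or role the policy is attached to. Here’s a minimal, made-up policy illustrating the mapping (the bucket name and IP range are hypothetical), written to a file and syntax-checked locally:

```shell
# "What" = Action, "which resources" = Resource, "when" = Condition
cat > /tmp/example-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
      }
    }
  ]
}
EOF

# Sanity-check that the policy file is well-formed JSON
python3 -m json.tool /tmp/example-policy.json > /dev/null && echo "policy OK"
```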
In “Why Configuration Management and Provisioning are Different” Carlos Nuñez advocates for the use of specialized infrastructure provisioning tools, like Terraform, Heat, and CloudFormation, instead of relying on configuration management tools, like Ansible or Puppet.
I agree with his argument about rollbacks, but not so much with the ones about maintaining state and complexity. However, I’m not yet comfortable enough to put my disagreement into words – my head is all over the place with clouds, and I’m still weak on the terminology.
The article is nice regardless, and made me look at the provisioning tools once again.
- run frequent actions by using simple commands
- easily explore your infrastructure and cloud resources interrelations via CLI
- ensure smart defaults & security best practices
- manage resources through robust runnable & scriptable templates
- explore, analyse and query your infrastructure offline
- explore, analyse and query your infrastructure through time
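The last two items – exploring offline and through time – can be roughly approximated even with the stock AWS CLI, by snapshotting API responses to dated files and querying those instead of the live API (a sketch only; file names and dates are made up):

```shell
# Snapshot today's view of EC2 into a dated file...
aws ec2 describe-instances > "ec2-$(date +%F).json"

# ...then explore it offline later, with no API calls needed
python3 -c 'import json,sys; d=json.load(open(sys.argv[1])); \
print(len(d.get("Reservations", [])))' "ec2-2017-02-01.json"

# Diff two snapshots to see how the infrastructure changed over time
diff <(python3 -m json.tool ec2-2017-02-01.json) \
     <(python3 -m json.tool ec2-2017-02-08.json)
```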
I came across this handy Amazon AWS manual for the maximum transmission unit (MTU) configuration for EC2 instances. This is not something one needs every day, but I’m sure that when I do need it, I’d otherwise be spending hours trying to find it.
The maximum transmission unit (MTU) of a network connection is the size, in bytes, of the largest permissible packet that can be passed over the connection. The larger the MTU of a connection, the more data that can be passed in a single packet. Ethernet packets consist of the frame, or the actual data you are sending, and the network overhead information that surrounds it.
Ethernet frames can come in different formats, and the most common format is the standard Ethernet v2 frame format. It supports 1500 MTU, which is the largest Ethernet packet size supported over most of the Internet. The maximum supported MTU for an instance depends on its instance type. All Amazon EC2 instance types support 1500 MTU, and many current instance sizes support 9001 MTU, or jumbo frames.
The following instances support jumbo frames:
- Compute optimized: C3, C4, CC2
- General purpose: M3, M4, T2
- Accelerated computing: CG1, G2, P2
- Memory optimized: CR1, R3, R4, X1
- Storage optimized: D2, HI1, HS1, I2
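As a quick companion to the manual: on a Linux instance you can check and change the MTU from the shell. The interface name `eth0` below is an assumption – newer instance types may expose `ens5` or similar:

```shell
# Read the current MTU of an interface from sysfs (loopback shown here,
# since it exists everywhere; substitute eth0/ens5 on a real instance)
cat /sys/class/net/lo/mtu

# Set a jumbo-frame MTU on a supported instance (needs root):
#   sudo ip link set dev eth0 mtu 9001

# Check the path MTU to a remote host, to spot where it drops to 1500:
#   tracepath amazon.com
```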
As always, Julia Evans has got you covered on the basics of networking and the MTU.
As I am reading this story – GitLab.com melts down after wrong directory deleted, backups fail – and these details, every single hair I have moves… I don’t (and didn’t) have any data on GitLab, so I haven’t lost anything. But as somebody who worked as a system administrator (and backup administrator) for years, I can imagine the physical and psychological state of the team all too well.
Sure, things could have been done better. But it’s easier said than done. Modern technology is very complex. And it changes fast. And businesses want to move fast too. And the proper resources (time, money, people) are not always allocated for mission critical tasks. One thing is for sure, the responsibility lies on a whole bunch of people for a whole bunch of decisions. But the hardest job is right now upon the tech people to bring back whatever they can. There’s no sleep. Probably no food. No fun. And a tremendous pressure all around.
I wish the guys and gals at GitLab super good luck. Hopefully they will find a snapshot to restore from and this whole thing will calm down and sort itself out. Stay strong!
And I guess I’ll be doing test restores all night today, making sure that all my things are covered…
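If you feel the same itch, a test restore doesn’t have to be elaborate – the point is to actually restore into a scratch location and compare, not just to check that the backup file exists. A tiny local sketch of the idea (all paths are made up):

```shell
# Make some "precious" data and back it up
mkdir -p /tmp/restore-test/data
echo "precious" > /tmp/restore-test/data/file.txt
tar -czf /tmp/restore-test/backup.tar.gz -C /tmp/restore-test data

# The actual test: restore into a scratch dir and diff against the original
mkdir -p /tmp/restore-test/scratch
tar -xzf /tmp/restore-test/backup.tar.gz -C /tmp/restore-test/scratch
diff -r /tmp/restore-test/data /tmp/restore-test/scratch/data && echo "restore OK"
```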
Update: you can now read the full post-mortem as well.
Subbu Allamaraju says “Don’t Build Private Clouds“. I agree with his rationale.
There are very few enterprises in the planet right now that need to own, operate and automate data centers. Unless you’ve at least 200,000 servers in multiple locations, or you’re in specific technology industries like communications, networking, media delivery, power, etc, you shouldn’t be in the data center and private cloud business. If you’re below this threshold, you should be spending most of your time and effort in getting out of the data center and not on automating and improving your on-premise data center footprint.
His main three points are:
- Private cloud makes you procrastinate doing the right things.
- Private cloud cost models are misleading.
- Don’t underestimate on-premise data center influence on your organization’s culture.