This year’s Jetpack annual report for this blog is ready – have a look. Here’s a teaser:
It’s been a busy year, so I haven’t been blogging as much as I wanted to, but overall, I think I did well (have a look at 2014 and 2013). Just to give you a quick comparison:
I blog mostly for myself, but it’s nice to see a slight growth in traffic. The fact that the most popular post on this blog throughout the years is “how to check Squid proxy version” is a little concerning, yet funny. Well, at least people still find my “Vim for Perl developers” useful, even though it’s been more than 10 years since I wrote that (and probably five years since I promised to update it soon).
But as I said, I’m quite satisfied with my blogging this year. Hopefully I can continue to do the same in 2016.
“5 AWS mistakes you should avoid” is a rather opinionated piece on what you should and shouldn’t do with your infrastructure, especially when using AWS. Here’s an example:
A typical web application consists of at least:
- load balancer
- scalable web backend
and looks like the following figure.
This pattern is very common and if yours looks different you should have (strong) reasons.
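To make that pattern a bit more concrete, here’s a minimal boto3 sketch of it – a classic load balancer with an auto-scaled web backend behind it. The names, AMI ID, and availability zones are placeholders of mine, not anything from the article:

```python
import boto3

REGION = "us-east-1"                    # placeholder region
AZS = ["us-east-1a", "us-east-1b"]      # placeholder availability zones

elb = boto3.client("elb", region_name=REGION)
autoscaling = boto3.client("autoscaling", region_name=REGION)

# A classic load balancer listening on HTTP port 80.
elb.create_load_balancer(
    LoadBalancerName="web-lb",
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP", "InstancePort": 80}],
    AvailabilityZones=AZS,
)

# A launch configuration describing the web backend instances.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-xxxxxxxx",             # placeholder AMI
    InstanceType="t2.micro",
)

# The Auto Scaling Group, attached to the load balancer.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=6,
    AvailabilityZones=AZS,
    LoadBalancerNames=["web-lb"],
)
```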
It’s all good advice in there, but it comes from a very narrow perspective. The “mistakes” are:
- managing infrastructure manually
- not using Auto Scaling Groups
- not analyzing metrics in CloudWatch (see the sketch after this list)
- ignoring Trusted Advisor
- underutilizing virtual machines
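On the CloudWatch point: if you’ve never pulled metrics out of it programmatically, here’s a minimal boto3 sketch that fetches average CPU utilization for one instance – the region and instance ID are made-up placeholders:

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Average CPU utilization over the last 24 hours, in 5-minute buckets.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-12345678"}],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```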
So, it looks like I’m not the only one trying to figure out Amazon EC2 virtual CPU allocation. Slashdot runs the story (and a heated debate, as usual) on the subject of Amazon’s non-definitive virtual CPUs:
ECUs were not the simplest approach to describing a virtual CPU, but they at least had a definition attached to them. Operations managers and those responsible for calculating server pricing could use that measure for comparison shopping. But ECUs were dropped as a visible and useful definition without announcement two years ago in favor of a descriptor — virtual CPU — that means, mainly, whatever AWS wants it to mean within a given instance family.
A precise number of ECUs in an instance has become simply a “virtual CPU.”
If you thought t2.micro was a tiny machine, I have news for you – Amazon announced the t2.nano instance type. It features 512 MB of RAM, 1 vCPU, and up to two Elastic Network Interfaces. The price for an on-demand instance is $0.0065 per hour.
This instance type is perfect for small websites, developer and testing environments, and other tasks which don’t require a lot of resources.
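To put that price in perspective: at $0.0065 an hour, a t2.nano left running non-stop works out to about $0.16 a day, or roughly $4.75 for a 730-hour month – before EBS storage and traffic, naturally.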
About a month ago, GitHub revealed its redesigned interface. It gets better and better with every iteration. But this time I also got a feeling of deja vu, which took me a while to figure out. And finally I did. The navigation menu moved from the right side to the top. And it’s not the first time it’s been there.
Here is a link to the Refactoring GitHub’s Design blog post (I linked to it before), which explains some of the design decisions and the menu on the right. Among other things, there’s a screenshot of how things used to be before. Have a look.
It’s not identical, but it’s pretty close.
Yesterday I wrote a blog post trying to figure out what CPU steal time is and why it occurs. The problem with that post was that I didn’t go deep enough.
I was looking at this issue from the point of view of a generic virtual machine. The case that I had to deal with wasn’t exactly like that. I saw the CPU steal time on an Amazon EC2 instance. Assuming that it was just my neighbors acting up, or Amazon having a temporary hardware issue, was the wrong conclusion.
That’s because I didn’t know enough about Amazon EC2. Well, I’ve learned a bunch since then, so here’s what I found.
Continue reading “CPU Steal Time. Now on Amazon EC2”
Here is an interesting web design idea that adds uniqueness to a website: use a random font for post titles, and use random color schemes for each post. To hell with consistency, you say? Well, apparently, being random is being consistent too.
Picked up the thought from this blog post.
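The linked post doesn’t spell out an implementation, but one way to get “random yet consistent” is to seed the choice from the post title itself, so each post always renders with the same supposedly random look. A quick Python sketch of that idea (the font and color lists are arbitrary examples of mine):

```python
import hashlib

# Arbitrary example lists - any web-safe fonts and palette would do.
FONTS = ["Georgia", "Courier New", "Trebuchet MS", "Palatino", "Verdana"]
COLORS = ["#1b4965", "#9b2226", "#386641", "#6a4c93", "#b5838d"]

def post_style(title):
    """Pick a font and color for a post, seeded by its title.

    Hashing the title keeps the choice stable across page loads,
    so every post looks random but stays consistent with itself.
    """
    digest = hashlib.md5(title.encode("utf-8")).digest()
    return FONTS[digest[0] % len(FONTS)], COLORS[digest[1] % len(COLORS)]

print(post_style("Vim for Perl developers"))
```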
Linux.com goes over the ways to fix and undo mistakes using the git version control software. Seasoned git users will probably know all of these already, but since I have to explain these things to git newcomers, I thought I’d have it handy somewhere here.
Here is a nice collection of screenshots (with some comments) from some really hardcore developers – people who are behind things like operating systems and programming languages, not the latest hipster startup that nobody will remember in three years. Even better, the screenshots were taken in 2002 and now, 13 years later, revisited.
Two things I found interesting here:
- Pretty much everyone calls their setup “boring”, yet it’s obviously so functional that very little changes over time.
- Some of these screenshots feature setups so basic that, for people not too familiar with the applications used, it would be difficult to tell which screenshot is from 2002 and which one is from 2015.
And while I’m nowhere near that level of developer, I still have to say that my desktop hasn’t changed much in the last 13 years either. I spend my days in the MATE Desktop Environment, which is a fork of Gnome that keeps the awesome Gnome 2 interface rather than all that craziness of Gnome 3. And like many other people featured there, I mostly use the browser and a gazillion terminal windows for my work. I also have Vim keybindings burnt into my fingers, and I can’t imagine ever switching to something else. Here’s how it looks today.
I’m sure there must be a screenshot of my desktop from back in the day somewhere on this blog, but I don’t think I’ll find it.
Here’s something that happens once in a blue moon – you get a server that seems overloaded while doing nothing. There are several reasons why that can happen, but today I’m only going to look at one of them, as it happened to me very recently.
Firstly, if you have any kind of important infrastructure, make sure you have monitoring tools in place. Not just the notification kind, like Nagios, but also graphing ones, like Zabbix and Munin. They will help you plenty in times like these.
When you have an issue to solve, you don’t want to be installing monitoring tools, and starting to gather your data. You want the data to be there already.
Now, for the real thing. What happened here? Well, obviously the CPU steal time seems way off. But what the hell is the CPU steal time? Here’s a handy article – Understanding the CPU steal time. And here is my favorite part of it:
There are two possible causes:
- You need a larger VM with more CPU resources (you are the problem).
- The physical server is over-sold and the virtual machines are aggressively competing for resources (you are not the problem).
The catch: you can’t tell which case your situation falls under by just watching the impacted instance’s CPU metrics.
In our case, it was a physical server issue, which we had no control over. But it was super helpful to be able to say what was going on. We prepared a “plan B”, which was to move to another server, but in the end the issue disappeared and we didn’t have to do that this time.
Oh, and if you don’t have those handy monitoring tools, you can always use top – the steal time shows up as the st value in the CPU summary line.
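And if you want a number you can log or alert on, rather than eyeballing top, here’s a minimal sketch that samples the steal counter straight from /proc/stat – Linux-specific, and assuming the usual field layout:

```python
import time

def cpu_counters():
    """Return the aggregate CPU counters from the first line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = f.readline().split()
    # fields: ['cpu', user, nice, system, idle, iowait, irq, softirq, steal, ...]
    return [int(v) for v in fields[1:]]

def steal_percent(interval=5):
    """Percentage of CPU time stolen by the hypervisor over `interval` seconds."""
    before = cpu_counters()
    time.sleep(interval)
    after = cpu_counters()
    deltas = [a - b for a, b in zip(after, before)]
    total = sum(deltas[:8])    # user..steal; guest time is already counted in user
    return 100.0 * deltas[7] / total if total else 0.0

if __name__ == "__main__":
    print("steal: %.1f%%" % steal_percent())
```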
P.S. : If you are on Amazon EC2, you might find this article useful as well.