NAS Performance: NFS vs Samba vs GlusterFS

I came across this question and also found the results of the benchmarks somewhat surprising.

  • GlusterFS replicated 2: 32-35 seconds, high CPU load
  • GlusterFS single: 14-16 seconds, high CPU load
  • GlusterFS + NFS client: 16-19 seconds, high CPU load
  • NFS kernel server + NFS client (sync): 32-36 seconds, very low CPU load
  • NFS kernel server + NFS client (async): 3-4 seconds, very low CPU load
  • Samba: 4-7 seconds, medium CPU load
  • Direct disk: < 1 second

The post is from 2012, so I’m curious whether this is still accurate.  Has anybody tried this and can confirm or refute it?
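The big gap between the sync and async NFS results comes down to the sync/async export option: with sync the server has to commit every write to disk before replying, while async lets it acknowledge writes that are still only in memory (much faster, but you can lose data if the server crashes).  The benchmark doesn’t show the exact export lines used, but the difference is literally one word in /etc/exports; a minimal sketch (the share path and client network below are made up):

    # /etc/exports – pick one of the two lines, not both
    /srv/share  192.168.1.0/24(rw,sync,no_subtree_check)    # safe: commit to disk before replying
    /srv/share  192.168.1.0/24(rw,async,no_subtree_check)   # fast: reply before data hits the disk

    # reload the export table after editing
    exportfs -ra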

Also, an interesting note from the answer to the above:

From what I’ve seen after a couple of packet captures, the SMB protocol can be chatty, but the latest version of Samba implements SMB2 which can both issue multiple commands with one packet, and issue multiple commands while waiting for an ACK from the last command to come back. This has vastly improved its speed, at least in my experience, and I know I was shocked the first time I saw the speed difference too – Troubleshooting Network Speeds — The Age Old Inquiry
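On the Samba side, SMB2 support has been there since around version 3.6, and recent versions negotiate it (or SMB3) automatically.  If you want to check or enforce it, the relevant smb.conf knobs look roughly like this – a sketch using Samba 4.x parameter names, not a tuning recommendation:

    # /etc/samba/smb.conf
    [global]
        server min protocol = SMB2     # refuse clients that can only speak SMB1

    # verify what actually got negotiated:
    #   testparm -sv | grep -i protocol
    #   smbstatus            # lists the protocol version per connection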


How Far Can You Go With HAProxy and a t2.micro

Here’s an interesting set of experiments trying to answer the question of how far you can go with an HAProxy setup on the smallest of the Amazon EC2 instances – the t2.micro (1 virtual CPU, 1 GB of RAM).  Here’s the summary.

460 requests/second

At 460 req/second response times are mostly a flat ~300 ms, except for two spikes. I attribute this to TCP congestion avoidance as the traffic approaches the limit and packets start to get dropped. After dropped packets are detected the clients reduce their transmission rate, but eventually the transmission rate stabilizes again just under the limit. Only 1739 requests timeout and 134918 succeed.

[…]

It seems that the limit of the t2.micro is around 500 req/second even for small responses.
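For reference, the kind of HAProxy setup such a test typically runs against is just a plain HTTP frontend in front of a backend or two; a minimal sketch (addresses, ports and timeouts are made up, not the author’s actual config):

    # haproxy.cfg – minimal HTTP proxy sketch
    global
        maxconn 2000

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend www
        bind *:80
        default_backend app

    backend app
        server app1 10.0.0.10:8080 check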

Fixing mistakes in Git

git

Linux.com goes over the ways to fix and undo mistakes using the Git version control software.  Seasoned Git users will probably know all of these already, but since I have to explain these things to Git newcomers, I thought I’d have them handy somewhere here.
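For the newcomers in question, the usual suspects are along these lines (file and commit names are made up; this isn’t a substitute for the article itself):

    # fix the message of the last commit (or add a forgotten file to it)
    git commit --amend

    # throw away uncommitted changes to a single file
    git checkout -- path/to/file

    # un-commit the last commit, keeping the changes staged
    git reset --soft HEAD~1

    # undo an already-pushed commit by creating a new "revert" commit
    git revert <commit>

    # find a commit you thought was lost
    git reflog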

Screenshots from developers : 2002 vs. 2015

Here is a nice collection of screenshots (with some comments) from some really hardcore developers – people who are behind things like operating systems and programming languages, not the latest hipster startup that nobody will remember in three years.  Better yet, the screenshots were originally taken in 2002 and now, 13 years later, revisited.

desktop_bwk_2015

Two things I found interesting here:

  1. Pretty much everyone calls their setup “boring”, yet it’s obviously so functional that very little changes over time.
  2. Some of these screenshots feature setups so basic that, for people not too familiar with the applications used, it would be difficult to tell which screenshot is from 2002 and which one is from 2015.

And while I’m nowhere near that level of developer, I still have to say that my desktop hasn’t changed much in the last 13 years either.  I spend my days in the MATE Desktop Environment, a fork of GNOME that maintains the awesome GNOME 2 interface without all that craziness of GNOME 3.  And like many of the people featured there, I mostly use the browser and a gazillion terminal windows for my work.  I also have Vim keybindings burnt into my fingers, and I can’t imagine ever switching to something else.  Here’s how it looks today.

desktop

I’m sure there must be a screenshot of my desktop from back in the day somewhere on this blog, but I don’t think I’ll find it.

CPU Steal Time

Here’s something that happens once in a blue moon – you get a server that seems overloaded while doing nothing.  There are several reasons why that can happen, but today I’m only going to look at one of them, as it happened to me very recently.

Firstly, if you have any kind of important infrastructure, make sure you have monitoring tools in place – not just the notification kind, like Nagios, but also graphing ones, like Zabbix and Munin.  These will help you plenty in times like this.

web1

When you have an issue to solve, you don’t want to be installing monitoring tools and only then starting to gather your data.  You want the data to be there already.

Now, for the real thing.  What happened here?  Well, obviously the CPU steal time seems way off.  But what the hell is the CPU steal time?  Here’s a handy article – Understanding the CPU steal time.  And here is my favorite part of it:

There are two possible causes:

  1. You need a larger VM with more CPU resources (you are the problem).
  2. The physical server is over-sold and the virtual machines are aggressively competing for resources (you are not the problem).

The catch: you can’t tell which case your situation falls under by just watching the impacted instance’s CPU metrics.

In our case, it was a physical server issue, which we had no control over.  But it was super helpful to be able to say what was going on.  We prepared a “plan B”, which was to move to another server, but in the end the issue disappeared and we didn’t have to do that this time.

Oh, and if you don’t have those handy monitoring tools, you can use top:

top_steal
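If all you need is that one number, top in batch mode (or vmstat) will do; a quick sketch – the exact wording of top’s summary line varies between procps versions:

    # one-shot CPU summary from top; the "st" value at the end is steal time
    top -bn1 | grep 'Cpu(s)'

    # or sample every 5 seconds; vmstat's last column ("st") is steal as well
    vmstat 5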

P.S.: If you are on Amazon EC2, you might find this article useful as well.