asciinema – record and share your terminal sessions, the right way

asciinema is a tool to record terminal sessions and share them as videos.  But unlike many other tools that provide this functionality, asciinema does a very smart thing – instead of encoding the session into a video, it interactively replays it in text mode, which allows one to select and copy-paste commands and outputs from the playback.  The resulting “video” is also much lighter and faster than it would be if encoded into a video stream.

This is great for demos, tutorials, and other more technical scenarios.  The website also has a collection of recent and featured public screencasts.
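Basic usage looks something like this (the filename is just an example):

$ asciinema rec demo.cast        # start recording; exit the shell or press Ctrl-D to stop
$ asciinema play demo.cast       # replay it right in the terminal
$ asciinema upload demo.cast     # share the recording via asciinema.org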

pushd/popd vs. cd

My shell of choice and circumstance for most of my Linux life was Bash.  So, naturally, in my head, shell pretty much equals Bash, and I rarely think about, or get into, situations where this is not true.  Recently, I was surprised by a script failure that left me scratching my head.  The command that failed in the script was pushd.

pushd and popd, it turns out, are built into Bash, but they are not standard POSIX commands, so not all shells have them.  My script wasn’t setting the shell explicitly, and ended up executing with Dash, which I hadn’t even heard of until that day.  The homepage of Dash says the following:

DASH is not Bash compatible, it’s the other way around.

Mkay… So, I’ve done two things:

  1. Set /bin/bash explicitly as my shell in the script.
  2. Switch to “cd folder && do something && cd -”, instead of the pushd/popd combination, where possible.

I knew about “cd -” before, but it was interesting to learn if there are any particular differences (hint: there are) between this approach and the pushd/popd one that I was using until now.  This StackOverflow thread (ok, ok, Unix StackExchange) was very helpful.
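In a nutshell: pushd and popd maintain a whole directory stack, while “cd -” only remembers the single previous directory.  A minimal illustration (do_something is a hypothetical command):

# pushd/popd keep a stack, so nested jumps unwind correctly (Bash built-in, not POSIX)
pushd /etc
pushd /var/log      # the stack now holds two previous directories
popd                # back to /etc
popd                # back to where we started

# "cd -" only remembers one previous directory ($OLDPWD), but it is POSIX
cd /etc && do_something && cd -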

Bulletproof Bash : Stop script on error

The other day I was puzzled by the results of a cron job script.  The Bash script in question was written in a hurry a while back, and I was under the assumption that if any of its steps failed, the whole script would fail.  I was wrong.  Some commands were failing, but the script execution continued.  It was especially difficult to notice, due to a number of unset variables, piped commands, and redirected error output.

Once I realized the problem, I got even more puzzled as to what the best solution was.  Sure, you can check an exit code after each command in the script, but that didn’t seem elegant or efficient.

A quick couple of Google searches brought me to this StackOverflow thread (no surprise there), which opened my eyes to a few Bash options that can be set at the beginning of the script to stop execution when an error or warning occurs (similar to use strict; use warnings; in Perl).  Here’s a test script for you with some test commands, pipes, error redirects, and options to control all that.

#!/bin/bash

# Stop on error
set -e
# Stop on uninitialized variables
set -u
# Stop on failed pipes
set -o pipefail

# Good command
echo "We start here ..."

# Use of non-initialized variable
echo "$FOOBAR"
echo "Still going after uninitialized variable ..."

# Bad command with no STDERR
cd /foobar 2> /dev/null
echo "Still going after a bad command ..."

# Good command into a bad pipe with no STDERR
echo "Good" | /some/bad/script 2> /dev/null
echo "Still going after a bad pipe ..."

# Unreachable if any of the above failed
echo "We should never get here!"

Save it to test.sh, make it executable (chmod +x test.sh), and run it like so:

$ ./test.sh || echo Something went wrong

Then try to comment out some options and some commands to see what happens in different scenarios.

I think, from now on, those three options will be the standard way I start all of my Bash scripts.
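By the way, the three options are often combined into a single line, so the “bulletproof” preamble can be as short as this:

#!/bin/bash
set -euo pipefail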


How to monitor your Linux servers with nmon

The “How to monitor your Linux servers with nmon” article provides some details on how to use the comprehensive server monitoring tool nmon (Nigel’s Monitor) to keep an eye on your server or two.  If you have more than a handful of servers, you’d probably opt for a full-blown monitoring solution, like Zabbix, but even with that, nmon can be useful for quick troubleshooting, screenshots, and data collection.

I’ve heard of nmon before and even used it occasionally.  What I didn’t know was that it can collect system metrics into a file, which can then later be analyzed and graphed with the nmonchart tool.

That’s pretty handy.  The extra bonus is that these tools are available in most Linux distributions, so there is no need to download/compile/configure things.
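For the curious, the collection workflow looks roughly like this (the interval and count values, and the file names, are just an illustration):

$ nmon -f -s 30 -c 120                          # snapshot every 30 seconds, 120 times, into a .nmon file
$ nmonchart myserver_170401.nmon report.html    # graph the collected data as an HTML page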


Nginx Amplify : comprehensive Nginx monitoring

Somehow I missed the announcement of the Nginx Amplify (beta) back in November of last year, so here it goes now.

Nginx Amplify is a new tool for the comprehensive monitoring of Nginx web servers.  Here’s what it can do for you:

  • Visually identify performance bottlenecks, overloaded servers, or potential DDoS attacks
  • Improve and optimize NGINX performance with intelligent advice and recommendations
  • Get alerts when something is wrong with the delivery of your application
  • Plan capacity and performance for web applications
  • Keep track of systems running NGINX

That’s in addition to the regular proactive monitoring of Nginx issues.  Have a look at the documentation for more details.

awless – a Mighty CLI for AWS

awless is a command line interface to Amazon AWS.  While Amazon AWS already has its own set of command line tools, awless makes things even simpler, with the following features:

  • run frequent actions by using simple commands
  • easily explore your infrastructure and cloud resources inter-relations via CLI
  • ensure smart defaults & security best practices
  • manage resources through robust runnable & scriptable templates (see awless templates)
  • explore, analyse and query your infrastructure offline
  • explore, analyse and query your infrastructure through time
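To give you a taste, exploring resources looks roughly like this (see the project’s README for the exact, current syntax):

$ awless list instances
$ awless list users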

SELinux Concepts – but for humans

SELinux has been an annoyance for me since the early days of Fedora and Red Hat bringing it into the distribution and enabling it by default (see this blog post, for example, from 2004 about Fedora 3).

Over the years, I’ve tried to learn it, make it useful, and find benefits in using it, but somehow those were never enough, and I kept falling back to disabling it.  On the other hand, my understanding of how SELinux works is slowly growing.  The video in this blog post helped a lot.
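For reference, “disabling it” usually amounts to something like this:

$ getenforce                  # check the current mode
Enforcing
$ sudo setenforce 0           # switch to permissive mode until the next reboot
# for a permanent change, set SELINUX=disabled in /etc/selinux/config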

And now I’m glad to add another useful resource to the “SELinux for mere mortals” collection.  The blog post mostly focuses on the terminology in the SELinux domain, and what means what.  It’s so simple and straightforward that it even uses examples of HTML and CSS – something I’d never seen before.  If you are making your way through the “how the heck do I make sense of SELinux” land, check it out.  I’m sure it’ll help.

How To Use Git to Manage your User Configuration Files

There are probably a gazillion different ways you can manage and synchronize your configuration files (aka dotfiles) between different Linux/UNIX boxes – anything from custom symlink scripts, all the way to configuration management tools like Puppet and Ansible.  Here are a few options to look at if you are not doing it already.

Personally, I’m using Ansible and I’m quite happy with it, as it allows me to have multiple playbooks (base configuration, desktop configuration, development setup, etc), and do more things than just manage my configuration files (install packages and tools that I often need, setup correct permissions, and more).

Recently, I came across this tutorial from Digital Ocean on how to manage your configuration files with git.  Again, there are a few options discussed in there, as even with git, there’s more than one way to do it (TMTOWTDI).

The one I heard about a long time ago, but had completely forgotten, and which I think is quite elegant, is the approach of separating the working directory from the git repository:

Now, we do things a bit differently. We will start by specifying a different working directory using the core.worktree git configuration option:

git config core.worktree "../../"

What this does is establish the working directory relative to the path of the .git directory.  The first ../ refers to the ~/configs directory, and the second one points us one step beyond that to our home directory.

Basically, we’ve told git “keep the repository here, but the files you are managing are two levels above the repo”.
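Putting it all together, the whole setup is just a few commands (~/configs is the tutorial’s example location; any path will do):

$ mkdir ~/configs && cd ~/configs
$ git init                              # the repository lives in ~/configs/.git
$ git config core.worktree "../../"     # ... but the working directory is $HOME
$ git add ~/.bashrc                     # track files straight from the home directory
$ git commit -m "Start tracking dotfiles"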

I guess, if you stick purely to git, you can offload some of the additional processing, such as permission changes and package installation, into one of the git hooks.  Something like post-checkout or post-merge.
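For example, a hypothetical post-merge hook (saved as .git/hooks/post-merge in the dotfiles repository and made executable) could tighten permissions after every pull:

#!/bin/bash
# Runs automatically after every successful "git pull" in the dotfiles repo.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/config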

MySQL 8 is coming

OpenSource.com covers the upcoming release of MySQL 8.

What happened to 6 & 7?

Years ago, before the Sun Microsystems purchase of MySQL AB, there was a version of MySQL with the number 6. Sadly, it was a bit ambitious and the change of ownership left it to wither. The MySQL Cluster product has been using the 7 series for years. With the new changes for MySQL 8, developers feel they have modified it enough to bump the big number.

The new version brings a whole lot of changes – to the filesystem organization and indexes, a faster ALTER TABLE, and more.