Linux utils that you might not know

Linux utils that you might not know covers a few Linux command line utilities that aren’t very well known (see the examples right after the list):

  • column, for “columnating” lists, which is very useful for display of table-like data (think CSV, for example);
  • cal, for displaying calendars;
  • factor, for calculating the prime factors of numbers;
  • numfmt, for formatting numbers and converting them to/from human-readable formats;
  • shred, for overwriting the content of a deleted file, making it much more difficult to recover.
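
In case these are new to you, here are a few quick one-liners to give a taste of column, factor, numfmt, and shred (the data and file names below are made up, obviously):

$ printf 'name,role\nalice,admin\nbob,dev\n' | column -t -s,
name   role
alice  admin
bob    dev

$ factor 84
84: 2 2 3 7

$ numfmt --to=iec 1073741824
1.0G

$ numfmt --from=iec 2K
2048

$ shred -u secrets.txt    # overwrite the file's contents, then remove it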

Bashing up

Here are a couple of useful Bash resources that came across my radar recently.

The first one is Julia Evans’ blog post “Bash scripting quirks & safety tips”.  It’s quite introductory, but it has a few useful tips.  The one in particular that I either didn’t know about or had completely forgotten is how to make Bash scripts safer by using “set -e”, “set -u”, and “set -o pipefail”.  These go well with another post of mine from not so long ago.

The second is Sam Rowe’s blog post “Advancing in the Bash Shell”, which I found useful for all kinds of navigation and variable expansion in the Bash command line, especially the bits on searching and reusing the history.
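
For a flavour of the sort of history tricks in question, a few Bash history expansions look like this (the paths are just placeholders):

$ cp /etc/hosts /tmp/hosts.bak
$ less !$       # !$ expands to the last argument of the previous command
$ ^less^vim     # rerun the previous command with "less" replaced by "vim"
$ !cp           # rerun the most recent command that started with "cp"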

sshrc – bring your .bashrc, .vimrc, etc. with you when you ssh

sshrc looks like a handy tool for those quick SSH sessions to machines where you can’t set up your full environment for whatever reason (maybe a shared account, automated templating, or restricted access).  Here’s a description from the project page:

sshrc works just like ssh, but it also sources the ~/.sshrc on your local computer after logging in remotely.

$ echo "echo welcome" >> ~/.sshrc
$ sshrc me@myserver
welcome

$ echo "alias ..='cd ..'" >> ~/.sshrc
$ sshrc me@myserver
$ type ..
.. is aliased to `cd ..'

You can use this to set environment variables, define functions, and run post-login commands. It’s that simple, and it won’t impact other users on the server – even if they use sshrc too. This makes sshrc very useful if you share a server with multiple users and can’t edit the server’s ~/.bashrc without affecting them, or if you have several servers that you don’t want to configure independently.
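
For example, a ~/.sshrc along these lines (made up, but in the spirit of the quote above) sets an environment variable, defines a function and an alias, and runs a post-login command:

export EDITOR=vim
alias ll='ls -lah'
mkcd() { mkdir -p "$1" && cd "$1"; }
echo "Logged in to $(hostname) as $(whoami)"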

I discovered it by accident while searching through packages in the Fedora repositories. So, yes, you can install it with yum/dnf.
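
If the package is simply called sshrc, installing it is the usual:

$ sudo dnf install sshrc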

pushd/popd vs. cd

My shell of choice and circumstance for most of my Linux life has been Bash.  So, naturally, in my head, shell pretty much equals Bash, and I rarely think about, or get into, situations where this is not true.  Recently, I was surprised by a script failure that left me scratching my head.  The command that failed in the script was pushd.

pushd and popd, it turns out, are built into Bash, but they are not standard POSIX commands, so not all shells have them.  My script wasn’t setting the shell explicitly, and it ended up executing with Dash, which I hadn’t even heard of until that day.  The homepage of Dash says the following:

DASH is not Bash compatible, it’s the other way around.

Mkay… So, I’ve done two things:

  1. Set /bin/bash explicitly as my shell in the script.
  2. Switched to “cd folder && do something && cd -”, instead of the pushd/popd combination, where possible.

I knew about “cd -” before, but it was interesting to learn whether there are any particular differences (hint: there are) between this approach and the pushd/popd one that I was using until now.  This StackOverflow thread (ok, ok, Unix StackExchange) was very helpful.
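
To make the difference concrete: pushd and popd maintain a whole directory stack, while “cd -” only remembers the single previous directory ($OLDPWD).  Here’s a minimal sketch (the directories are arbitrary):

# pushd/popd nest cleanly, because Bash keeps a stack of directories
pushd /tmp > /dev/null
pushd /var/log > /dev/null
popd > /dev/null    # back to /tmp
popd > /dev/null    # back to where we started

# "cd -" only swaps between the current and the previous directory
cd /tmp
cd /var/log
cd -                # back to /tmp
cd -                # back to /var/log again, not to where we started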

Bulletproof Bash: Stop script on error

The other day I was puzzled by the results of a cron job script.  The Bash script in question was written in a hurry a while back, and I was under the assumption that if any of its steps failed, the whole script would fail.  I was wrong.  Some commands were failing, but the script execution continued.  It was especially difficult to notice due to a number of unset variables, piped commands, and redirected error output.

Once I realized the problem, I got even more puzzled as to what the best solution was.  Sure, you can check the exit code after each command in the script, but that didn’t seem elegant or efficient.

A couple of quick Google searches brought me to this StackOverflow thread (no surprise there), which opened my eyes to a few Bash options that can be set at the beginning of a script to stop execution when an error or warning occurs (similar to use strict; use warnings; in Perl).  Here’s the test script for you, with some test commands, pipes, error redirects, and the options to control all that.

#!/bin/bash

# Stop on error
set -e
# Stop on uninitialized variables
set -u
# Stop on failed pipes
set -o pipefail

# Good command
echo "We start here ..."

# Use of non-initialized variable
echo "$FOOBAR"
echo "Still going after uninitialized variable ..."

# Bad command with no STDERR
cd /foobar 2> /dev/null
echo "Still going after a bad command ..."

# Good command into a bad pipe with no STDERR
echo "Good" | /some/bad/script 2> /dev/null
echo "Still going after a bad pipe ..."

# Final marker: should never be reached with the options above
echo "We should never get here!"

Save it to test.sh, make it executable (chmod +x test.sh), and run it like so:

$ ./test.sh || echo Something went wrong

Then try to comment out some options and some commands to see what happens in different scenarios.

I think, from now on, those three options will be the standard way I start all of my bash scripts.
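
In practice, that boils down to starting each script with something like this (the short combined form of the three options above):

#!/bin/bash
set -euo pipefail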