I came across the second edition of Prentice Hall’s “A Practical Guide to Linux Commands, Editors, and Shell Programming” by Mark G. Sobell (original link). This is a rather lengthy book at just over 1,000 pages, covering everything from the history of Linux and basic commands all the way to bash, Perl, and sed, and how things work both on the inside and the outside.
It’s probably not one of those books to read from cover to cover, but quite handy to keep as a reference and flip a few pages once in a while.
WP-CLI is a super useful tool, which I use on a daily basis and wish more people knew about. Thankfully, there is now “What Is WP-CLI? A Beginner’s Guide“, which explains what it is, how to install it, how to use it, and where to go from there.
Linux utils that you might not know covers a few Linux command line utilities that aren’t very well known:
- column, for “columnating” lists, which is very useful for displaying table-like data (think CSV, for example);
- cal, for displaying calendars;
- factor, for calculating the prime factors of numbers;
- numfmt, for formatting numbers and converting them to/from human-readable formats;
- shred, for overwriting the content of a deleted file, making it much more difficult to recover.
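A quick taste of a few of these, with some made-up input data of my own:

```shell
# column: align comma-separated data into a readable table
printf 'name,role\nalice,admin\nbob,user\n' | column -t -s,

# numfmt: convert a raw byte count into a human-readable size
numfmt --to=iec 1048576    # prints 1.0M

# factor: print the prime factorization of a number
factor 42                  # prints 42: 2 3 7
```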
Vidar Hokstad explains what systemd units are and how to write them. Very useful for that day when I will stop hating systemd and will try to embrace it.
Systemd has become the de facto new standard init system for Linux-based systems. While not everyone has made the switch yet, pretty much all the major distros have made the decision to switch.
For most people this has not meant all that much yet, other than a lot of controversy. Systemd has built-in SysV init compatibility, so it is quite possible to avoid dealing with it directly.
But there is much to be gained from picking up some basics. Systemd is very powerful.
I’m not going to deal with the basics of interacting with systemd as that’s well covered elsewhere. You can find a number of basic tips and tricks here.
Instead I want to talk about how to write systemd units.
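As a taste of what a unit looks like, here is a minimal service unit (the service name and binary path are hypothetical examples of mine, not from Vidar’s post):

```
[Unit]
Description=Example daemon (hypothetical)
After=network.target

[Service]
ExecStart=/usr/local/bin/example-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Dropped into /etc/systemd/system/example.service, it would be picked up with “systemctl daemon-reload” and started with “systemctl enable --now example.service”.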
Doug Vitale Tech Blog runs a post with a collection of deprecated Linux networking commands and their replacements. Pretty handy if you want to update some of your old bash scripts.
| Deprecated command | Replacement(s) |
|---|---|
| arp | ip n (ip neighbor) |
| ifconfig | ip a (ip addr), ip link, ip -s (ip -stats) |
| nameif | ip link, ifrename |
| netstat | ss, ip route (for netstat -r), ip -s link (for netstat -i), ip maddr (for netstat -g) |
| route | ip r (ip route) |
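For example, the information that used to come from ifconfig, route -n, and netstat -i is all available from the ip tool nowadays:

```shell
# Replaces ifconfig: addresses and state of all interfaces
ip addr show

# Replaces route -n: the kernel routing table
ip route

# Replaces netstat -i: per-interface traffic statistics
ip -s link

# Replaces arp -a: the neighbor (ARP) table
ip neighbor
```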
Here are a couple of useful Bash resources that came upon my radar recently.
First one is Julia Evans’ blog post “Bash scripting quirks & safety tips“. It’s quite introductory, but it has a few useful tips. The one in particular that I either didn’t know about or had completely forgotten is on how to make Bash scripts safer by using “set -e“, “set -u“, and “set -o pipefail“. These go well with another post of mine from not so long ago.
The second is Sam Rowe’s blog post “Advancing in the Bash Shell“, which I found useful for all kinds of navigation and variable expansion in Bash command line. Especially the bits on searching and reusing the history.
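The history tricks are mostly interactive (Ctrl+R for reverse search, “!!” for the previous command, “!$” for its last argument), but the related “$_” variable works in scripts too; a tiny example of my own:

```shell
# $_ holds the last argument of the previous command, which saves
# retyping long paths (the directory name here is just an example)
mkdir -p /tmp/history-demo/nested/path
cd "$_"
pwd    # now in /tmp/history-demo/nested/path
```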
sshrc looks like a handy tool, for those quick SSH sessions to machines, where you can’t setup your full environment for whatever reason (maybe a shared account or automated templating or restricted access). Here’s a description from the project page:
sshrc works just like ssh, but it also sources the ~/.sshrc on your local computer after logging in remotely.
$ echo "echo welcome" >> ~/.sshrc
$ sshrc me@myserver
$ echo "alias ..='cd ..'" >> ~/.sshrc
$ sshrc me@myserver
$ type ..
.. is aliased to `cd ..'
You can use this to set environment variables, define functions, and run post-login commands. It’s that simple, and it won’t impact other users on the server – even if they use sshrc too. This makes sshrc very useful if you share a server with multiple users and can’t edit the server’s ~/.bashrc without affecting them, or if you have several servers that you don’t want to configure independently.
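For instance, a ~/.sshrc along these lines (the contents are entirely my own example, not from the project page) would give every sshrc session a familiar editor, a helper function, and a login banner:

```shell
# Hypothetical ~/.sshrc: sourced on the remote side after login

# Environment variable for the session
export EDITOR=vim

# A small helper function available in the remote session
psgrep() {
    ps aux | grep "$1" | grep -v grep
}

# A post-login command
echo "Welcome, $USER"
```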
I discovered it by accident while searching through packages in the Fedora repositories. So, yes, you can install it with yum/dnf.
asciinema is a tool to record terminal sessions and share them as videos. But unlike many other tools that provide this functionality, asciinema does a very smart thing: instead of encoding the session into a video, it replays it interactively in text mode, which allows one to select and copy-paste commands and output from the playback. The resulting “video” is also much lighter and faster than it would be if encoded into a video stream.
This is great for demos, tutorials, and other more technical scenarios. The website also has a collection of recent and featured public screencasts.
My shell of choice and circumstance for most of my Linux life has been Bash. So, naturally, in my head, shell pretty much equals Bash, and I rarely think of or get into situations where this is not true. Recently, I was surprised by a script failure which left me scratching my head. The command that failed in the script was pushd.
pushd and popd, it turns out, are built into Bash, but they are not standard POSIX commands, so not all shells have them. My script wasn’t setting the shell explicitly, and ended up executing with Dash, which I hadn’t even heard of until that day. The homepage of Dash says the following:
DASH is not Bash compatible, it’s the other way around.
Mkay… So, I’ve done two things:
- Set /bin/bash explicitly as my shell in the script.
- Switch to “cd folder && do something && cd -“, instead of the pushd/popd combination, where possible.
I knew about “cd -“ before, but it was interesting to learn whether there are any particular differences (hint: there are) between this approach and the pushd/popd one that I had been using until now. This StackOverflow thread (ok, ok, Unix StackExchange) was very helpful.
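The short version of the difference: pushd/popd maintain a whole stack of directories, while “cd -“ only remembers the single previous one (kept in $OLDPWD). A quick sketch:

```shell
cd /tmp

# pushd/popd: a stack, so nesting works (Bash built-ins, not POSIX)
pushd /usr > /dev/null
pushd /var > /dev/null
dirs                 # prints the stack: /var /usr /tmp
popd > /dev/null     # back to /usr
popd > /dev/null     # back to /tmp

# cd -: only one level of memory, kept in $OLDPWD
cd /usr
cd -                 # back to /tmp
cd -                 # back to /usr again, not further down any stack
```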
The other day I was puzzled by the results of a cron job script. The bash script in question was written in a hurry a while back, and I was under the assumption that if any of its steps failed, the whole script would fail. I was wrong. Some commands were failing, but the script execution continued. This was especially difficult to notice, due to a number of unset variables, piped commands, and redirected error output.
Once I realized the problem, I got even more puzzled as to what the best solution was. Sure, you can check the exit code after each command in the script, but that didn’t seem elegant or efficient.
A quick couple of Google searches brought me to this StackOverflow thread (no surprise there), which opened my eyes to a few bash options that can be set at the beginning of the script to stop execution when an error or warning occurs (similar to “use strict; use warnings;” in Perl). Here’s a test script for you with some test commands, pipes, error redirects, and options to control all that.
#!/bin/bash

# Stop on error
set -e
# Stop on uninitialized variables
set -u
# Stop on failed pipes
set -o pipefail

# Good command
echo "We start here ..."

# Use of non-initialized variable
echo "Use of non-initialized variable: $FOOBAR"
echo "Still going after uninitialized variable ..."

# Bad command with no STDERR
cd /foobar 2> /dev/null
echo "Still going after a bad command ..."

# Good command into a bad pipe with no STDERR
echo "Good" | /some/bad/script 2> /dev/null
echo "Still going after a bad pipe ..."

echo "We should never get here!"
Save it as test.sh, make it executable (chmod +x test.sh), and run it like so:
$ ./test.sh || echo Something went wrong
Then try to comment out some options and some commands to see what happens in different scenarios.
I think, from now on, those three options will be the standard way I start all of my bash scripts.