Writing systemd Units

Vidar Hokstad explains what systemd units are and how to write them.  Very useful for that day when I finally stop hating systemd and try to embrace it.

Systemd has become the de facto standard init for Linux-based systems. While not everyone has made the switch yet, pretty much all the major distros have decided to.

For most people this has not meant all that much yet, other than a lot of controversy. Systemd has built-in SysV init compatibility, so it has been quite possible to avoid dealing with it directly.

But there is much to be gained from picking up some basics. Systemd is very powerful.

I’m not going to deal with the basics of interacting with systemd as that’s well covered elsewhere. You can find a number of basic tips and tricks here.

Instead I want to talk about how to write systemd units.
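
To make this a bit more concrete, here is a minimal sketch of a service unit (the unit name, binary, and config path are made up for illustration; see the article for the real details):

# /etc/systemd/system/myapp.service - a made-up example service
[Unit]
Description=My example application
After=network.target

[Service]
ExecStart=/usr/bin/myapp --config /etc/myapp.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target

Once a file like this is in place, "systemctl daemon-reload" followed by "systemctl enable myapp && systemctl start myapp" should pick it up.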

Deprecated Linux networking commands and their replacements

Doug Vitale Tech Blog runs a post with a collection of deprecated Linux networking commands and their replacements. Pretty handy if you want to update some of your old bash scripts.

Deprecated command    Replacement command(s)
------------------    ----------------------
arp                   ip n (ip neighbor)
ifconfig              ip a (ip addr), ip link, ip -s (ip -stats)
iptunnel              ip tunnel
iwconfig              iw
nameif                ip link, ifrename
netstat               ss, ip route (for netstat -r), ip -s link (for netstat -i), ip maddr (for netstat -g)
route                 ip r (ip route)
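
For example, here is how a couple of the old invocations map to their ip equivalents (eth0 is just a placeholder interface name):

$ ifconfig eth0             # old way
$ ip addr show dev eth0     # new way

$ route -n                  # old way
$ ip route show             # new way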

Bashing up

Here are a couple of useful Bash resources that came up on my radar recently.

First one is Julia Evans’ blog post “Bash scripting quirks & safety tips“.  It’s quite introductory, but it has a few useful tips.  The one in particular that I either didn’t know about or had completely forgotten is how to make Bash scripts safer by using “set -e“, “set -u“, and “set -o pipefail“.  These go well with another post of mine from not so long ago.

The second is Sam Rowe’s blog post “Advancing in the Bash Shell“, which I found useful for all kinds of navigation and variable expansion in the Bash command line.  Especially the bits on searching and reusing the history.
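
To give a taste of that, here are a few history tricks that work in Bash (these are mine, not necessarily the ones from the post):

$ sudo !!      # repeat the previous command with sudo in front
$ less !$      # reuse the last argument of the previous command
$ !grep        # re-run the most recent command that started with "grep"

And, of course, Ctrl+R gives you an interactive reverse search through the history.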

sshrc – bring your .bashrc, .vimrc, etc. with you when you ssh

sshrc looks like a handy tool for those quick SSH sessions to machines where you can’t set up your full environment for whatever reason (maybe a shared account, automated templating, or restricted access).  Here’s a description from the project page:

sshrc works just like ssh, but it also sources the ~/.sshrc on your local computer after logging in remotely.

$ echo "echo welcome" >> ~/.sshrc
$ sshrc me@myserver
welcome

$ echo "alias ..='cd ..'" >> ~/.sshrc
$ sshrc me@myserver
$ type ..
.. is aliased to `cd ..'

You can use this to set environment variables, define functions, and run post-login commands. It’s that simple, and it won’t impact other users on the server – even if they use sshrc too. This makes sshrc very useful if you share a server with multiple users and can’t edit the server’s ~/.bashrc without affecting them, or if you have several servers that you don’t want to configure independently.

I discovered it by accident when searching through packages in the Fedora repositories. So, yes, you can install it with yum/dnf.

asciinema – record and share your terminal sessions, the right way

asciinema is a tool to record terminal sessions and share them as videos.  But unlike many other tools that provide this functionality, asciinema does a very smart thing – instead of encoding the session into a video, it interactively replays it in text mode, which allows one to select and copy-paste commands and outputs from the playback.  The resulting “video” is also much lighter and faster than it would be if encoded into a video stream.

This is great for demos, tutorials, and other more technical scenarios.  The website also has a collection of recent and featured public screencasts.
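
The basic workflow is as simple as it gets (the file name here is just an example):

$ asciinema rec demo.cast       # record a new session into a local file
$ asciinema play demo.cast      # replay it right in the terminal
$ asciinema upload demo.cast    # publish it to asciinema.org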

pushd/popd vs. cd

My shell of choice and circumstance for most of my Linux life was Bash.  So, naturally, in my head, shell pretty much equals Bash, and I rarely think about, or get into, situations where this is not true.  Recently, I was surprised by a script failure, which left me scratching my head.  The command that failed in the script was pushd.

pushd and popd, it turns out, are built into Bash, but they are not standard POSIX commands, so not all shells have them.  My script wasn’t setting the shell explicitly, and ended up executing with Dash, which I hadn’t even heard of until that day.  The homepage of Dash says the following:

DASH is not Bash compatible, it’s the other way around.

Mkay… So, I’ve done two things:

  1. Set /bin/bash explicitly as my shell in the script.
  2. Switched to “cd folder && do something && cd -“ instead of the pushd/popd combination, where possible.

I knew about “cd -” before, but it was interesting to learn whether there are any particular differences (hint: there are) between this approach and the pushd/popd one that I was using until now.  This StackOverflow thread (ok, ok, Unix StackExchange) was very helpful.
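
The gist of it: “cd -” only remembers one previous directory, while pushd/popd maintain a whole stack.  A quick illustration:

$ pushd /tmp        # save the current directory on the stack, go to /tmp
$ pushd /etc        # save /tmp on the stack, go to /etc
$ dirs              # show the whole directory stack
$ popd              # back to /tmp
$ popd              # back to where we started

With “cd -” you can only ever toggle between the two most recent directories.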

Bulletproof Bash : Stop script on error

The other day I was puzzled by the results of a cron job script.  The bash script in question was written in a hurry a while back, and I was under the assumption that if any of its steps failed, the whole script would fail.  I was wrong.  Some commands were failing, but the script execution continued.  It was especially difficult to notice, due to a number of unset variables, piped commands, and redirected error output.

Once I realized the problem, I got even more puzzled as to what the best solution was.  Sure, you can check the exit code after each command in the script, but that didn’t seem elegant or efficient.

A quick couple of Google searches brought me to this StackOverflow thread (no surprise there), which opened my eyes to a few bash options that can be set at the beginning of the script to stop execution when an error or warning occurs (similar to “use strict; use warnings;” in Perl).  Here’s a test script for you with some test commands, pipes, error redirects, and options to control all that.

#!/bin/bash

# Stop on error
set -e
# Stop on uninitialized variables
set -u
# Stop on failed pipes
set -o pipefail

# Good command
echo "We start here ..."

# Use of non-initialized variable
echo "$FOOBAR"
echo "Still going after uninitialized variable ..."

# Bad command with no STDERR
cd /foobar 2> /dev/null
echo "Still going after a bad command ..."

# Good command into a bad pipe with no STDERR
echo "Good" | /some/bad/script 2> /dev/null
echo "Still going after a bad pipe ..."

# This line should never be reached
echo "We should never get here!"

Save it to test.sh, make it executable (chmod +x test.sh), and run it like so:

$ ./test.sh || echo Something went wrong

Then try to comment out some options and some commands to see what happens in different scenarios.

I think, from now on, those three options will be the standard way I start all of my bash scripts.


CakePHP 3 : Remove Shell Welcome Header

CakePHP 3 has excellent support for command line Shells, Tasks, and Console Tools.  There are a few that are bundled with the framework itself, and more that come from a variety of plugins.  And, of course, you can have your own commands, specific to your application.  Running ./bin/cake without any arguments lists all of them:

$ ./bin/cake

Welcome to CakePHP v3.4.3 Console
---------------------------------------------------------------
App : src
Path: /home/leonid/Work/cakephp_test/src/
PHP : 7.0.16
---------------------------------------------------------------
Current Paths:

* app:  src
* root: /home/leonid/Work/cakephp_test
* core: /home/leonid/Work/cakephp_test/vendor/cakephp/cakephp

Available Shells:

[Bake] bake

[DebugKit] benchmark, whitespace

[Migrations] migrations

[CORE] cache, i18n, orm_cache, plugin, routes, server

[app] console

To run an app or core command, type `cake shell_name [args]`
To run a plugin command, type `cake Plugin.shell_name [args]`
To get help on a specific command, type `cake shell_name --help`

There is one tiny little annoyance though.  Sometimes, it’s useful to get the output of a CakePHP Shell and use it in another script.  For example, you might need to get a list of all loaded plugins and loop over them, performing another action, outside of CakePHP.  Say, in a bash script.  Getting a list of loaded plugins is easy with the bundled shell, like so:

$ ./bin/cake plugin loaded

Welcome to CakePHP v3.4.3 Console
---------------------------------------------------------------
App : src
Path: /home/leonid/Work/cakephp_test/src/
PHP : 7.0.16
---------------------------------------------------------------
Bake
DebugKit
Migrations

But, as you can see, the output is not very useful for machine processing. The welcome header is in the way.  Sure, you can parse it out with regular expressions, or even a simple line count.  But that lacks elegance.  Is there a better way?  I thought there was.

My first approach was to use the --quiet option, which, I thought, would leave me with just the needed output.  It turns out, that’s not what it does.  It strips out all the output, and there is no list of plugins at all.

The second approach worked out better.  I learned about it from this thread.  The solution is to extend the needed CakePHP shell and override the protected _welcome() method.  Here’s the content of the newly created application-level shell in src/Shell/PluginShell.php:

<?php
namespace App\Shell;

use Cake\Shell\PluginShell as Shell;

class PluginShell extends Shell
{
    /**
     * Silence the welcome message
     *
     * @return void
     */
    protected function _welcome()
    {
    }
}

And now running the same command as before produces a cleaner output:

$ ./bin/cake plugin loaded
Bake
DebugKit
Migrations

This can now be easily used in other scripts without any need for regular expressions and other trimming techniques.
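
For example, here’s the kind of bash loop I had in mind earlier (the echo is a placeholder for whatever real action you need):

#!/bin/bash
# Iterate over all loaded CakePHP plugins
for plugin in $(./bin/cake plugin loaded); do
    echo "Doing something with plugin: $plugin"
done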

awless – a Mighty CLI for AWS

awless is a command line interface to Amazon AWS.  While Amazon AWS already has its own set of command line tools, awless makes things even simpler, with the following features:

  • run frequent actions by using simple commands
  • easily explore your infrastructure and cloud resources inter relations via CLI
  • ensure smart defaults & security best practices
  • manage resources through robust runnable & scriptable templates (see awless templates)
  • explore, analyse and query your infrastructure offline
  • explore, analyse and query your infrastructure through time
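
To give you an idea, everyday usage looks something like this (a couple of commands picked from my quick look at the project’s documentation; treat this as a sketch rather than a reference):

$ awless list instances     # list your EC2 instances
$ awless list users         # list your IAM users
$ awless log                # show the history of changes made through awless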

How To Use Git to Manage your User Configuration Files

There is probably a gazillion different ways to manage and synchronize your configuration files (aka dotfiles) between different Linux/UNIX boxes – anything from custom symlink scripts, all the way to configuration management tools like Puppet and Ansible.  Here are a few options to look at if you are not doing it already.

Personally, I’m using Ansible and I’m quite happy with it, as it allows me to have multiple playbooks (base configuration, desktop configuration, development setup, etc), and to do more than just manage my configuration files (install packages and tools that I often need, set up correct permissions, and more).

Recently, I came across this tutorial from Digital Ocean on how to manage your configuration files with git.  Again, there are a few options discussed in there, as even with git, there’s more than one way to do it (TMTOWTDI).

The one that I heard about a long time ago, but had completely forgotten, and which I think is quite elegant, is the approach of separating the working directory from the git repository:

Now, we do things a bit differently. We will start by specifying a different working directory using the core.worktree git configuration option:

git config core.worktree "../../"

What this does is establish the working directory relative to the path of the .git directory. The first ../ refers to the ~/configs directory, and the second one points us one step beyond that to our home directory.

Basically, we’ve told git “keep the repository here, but the files you are managing are two levels above the repo”.
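
Putting it all together, the whole setup boils down to a handful of commands (following the tutorial’s ~/configs example; .bashrc is just one file to start with):

$ mkdir ~/configs && cd ~/configs
$ git init
$ git config core.worktree "../../"
$ git add ~/.bashrc
$ git commit -m "Start tracking .bashrc"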

I guess, if you stick purely to git, you can offload some of the additional processing, such as permission changes and package installation, into one of the git hooks.  Something like post-checkout or post-merge.
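
For example, a post-merge hook along these lines would restore strict permissions after every pull (a minimal sketch; the paths and modes are just illustrations):

#!/bin/bash
# .git/hooks/post-merge - runs after a successful "git pull"
chmod 700 ~/.ssh
chmod 600 ~/.ssh/config

Just remember to make the hook itself executable (chmod +x .git/hooks/post-merge).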