Thoughts on technology, movies, and everything else
System administration is a special area of IT. It also has a special place in my heart. It is an interesting mixture of all the other disciplines: common across the whole industry, and at the same time unique to each person, company, and geographical location. When I have something to say or share about system administration, I use this category.
My shell of choice and circumstance for most of my Linux life has been Bash. So, naturally, in my head, shell pretty much equals Bash, and I rarely think about or get into situations where this is not true. Recently, I was surprised by a script failure, which left me scratching my head. The command that failed in the script was pushd.
pushd and popd, it turns out, are built into Bash, but they are not standard POSIX commands, so not all shells have them. My script wasn’t setting the shell explicitly and ended up executing with Dash, which I hadn’t even heard of until that day. The homepage of Dash says the following:
DASH is not Bash compatible, it’s the other way around.
Mkay… So, I’ve done two things:
Set /bin/bash explicitly as my shell in the script.
Switch to “cd folder && do something && cd -”, instead of the pushd/popd combination, where possible.
I knew about “cd -” before, but it was interesting to learn whether there are any particular differences (hint: there are) between this approach and the pushd/popd one that I had been using until now. This StackOverflow thread (ok, ok, Unix StackExchange) was very helpful.
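To make the difference concrete, here is a small, POSIX-safe sketch (the directories are just examples): “cd -” only remembers one previous directory, stored in $OLDPWD, so calling it twice toggles between two locations, while pushd/popd keep a whole stack you can unwind.

```shell
# `cd -` only remembers ONE previous directory ($OLDPWD),
# so calling it twice toggles between two locations:
cd /tmp
cd /usr
cd /
cd - > /dev/null    # back to /usr ($OLDPWD); `cd -` prints the new dir
P1=$(pwd)
cd - > /dev/null    # toggles back to /, NOT back to /tmp
P2=$(pwd)
echo "$P1 then $P2"

# pushd/popd (Bash, not POSIX) keep a full stack instead, so the
# equivalent sequence would unwind all the way back to the start:
#   pushd /tmp; pushd /usr; pushd /; popd; popd; popd
```

So for a single “go there, do something, come back” the two are interchangeable, but for nested jumps only the stack-based pushd/popd gets you back correctly.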
The other day I was puzzled by the results of a cron job script. The bash script in question had been written in a hurry a while back, and I was under the assumption that if any of its steps failed, the whole script would fail. I was wrong. Some commands were failing, but the script execution continued. It was especially difficult to notice due to a number of unset variables, piped commands, and redirected error output.
Once I realized the problem, I got even more puzzled as to what the best solution was. Sure, you can check the exit code after each command in the script, but that didn’t seem elegant or efficient.
A quick couple of Google searches brought me to this StackOverflow thread (no surprise there), which opened my eyes to a few bash options that can be set at the beginning of the script to stop execution when an error or warning occurs (similar to “use strict; use warnings;” in Perl). Here’s a test script for you with some test commands, pipes, error redirects, and options to control all that.
#!/bin/bash

# Stop on error
set -o errexit
# Stop on uninitialized variables
set -o nounset
# Stop on failed pipes
set -o pipefail

# Good command
echo "We start here ..."

# Use of an uninitialized variable
echo "The value is: $UNDEFINED_VARIABLE"
echo "Still going after uninitialized variable ..."

# Bad command with no STDERR
cd /foobar 2> /dev/null
echo "Still going after a bad command ..."

# Good command into a bad pipe with no STDERR
echo "Good" | /some/bad/script 2> /dev/null
echo "Still going after a bad pipe ..."

echo "We should never get here!"
Save it to test.sh, make it executable (chmod +x test.sh), and run it like so:
$ ./test.sh || echo Something went wrong
Then try to comment out some options and some commands to see what happens in different scenarios.
I think, from now on, those three options will be the standard way I start all of my bash scripts.
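For reference, the three options also have a terser spelling, and the combined one-liner behaves the same. Here is a quick sanity check (run from any shell; it launches bash explicitly):

```shell
# The long form used in the test script:
#   set -o errexit -o nounset -o pipefail
# is equivalent to the common Bash one-liner:
#   set -euo pipefail

# Sanity check: with errexit+pipefail, a failing command on the left
# side of a pipe aborts the script before the final echo ever runs.
RC=0
OUT=$(bash -c 'set -euo pipefail; false | true; echo unreachable') || RC=$?
echo "exit code: $RC, output: '$OUT'"
```

Without pipefail, the same pipeline would report the exit code of true (the last command), and the script would happily continue.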
“How to monitor your Linux servers with nmon” article provides some details on how to use the comprehensive server monitoring tool “nmon” (Nigel’s Monitor) to keep an eye on your server or two. If you have more than a handful of servers, you’d probably opt for a full-blown monitoring solution, like Zabbix, but even with that, nmon can be useful for quick troubleshooting, screenshots, and data collection.
I’ve heard of nmon before and even used it occasionally. What I didn’t know was that it can collect system metrics into a file, which can then later be analyzed and graphed with the nmonchart tool.
That’s pretty handy. The extra bonus is that these tools are available in most Linux distributions, so there is no need to download/compile/configure things.
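For the record, the collection workflow looks roughly like this (the interval and snapshot count are arbitrary example values, and the .nmon file name is whatever nmon generates on your host — check the man page for your version):

```shell
# Collect data in the background: -f writes to a <host>_<date>_<time>.nmon
# file in the current directory, -s is the interval in seconds, and -c is
# the number of snapshots. This records every 30 seconds for an hour:
nmon -f -s 30 -c 120

# Later, turn the collected file into an HTML page with graphs
# (file names here are hypothetical examples):
nmonchart myhost_190101_1200.nmon myhost.html
```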
SELinux has been an annoyance for me since the early days of Fedora and Red Hat bringing it into the distribution and enabling it by default (see this blog post, for example, from 2004 about Fedora 3).
Over the years, I’ve tried to learn it, make it useful, and find benefits in using it, but somehow those were never enough, and I kept falling back to disabling it. On the other hand, my understanding of how SELinux works is slowly growing. The video in this blog post helped a lot.
And now I’m glad to add another useful resource to the “SELinux for mere mortals” collection. The blog post mostly focuses on the terminology in the SELinux domain, and what means what. It’s so simple and straightforward that it even uses examples based on HTML and CSS – something I’ve never seen before. If you are making your way through the “how the heck do I make sense of SELinux” land, check it out. I’m sure it’ll help.
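To tie some of that terminology to something you can actually see, here is where the labels show up on any SELinux-enabled box (the output below is a typical example, not from my machine):

```shell
# Show the SELinux context of a file. The label breaks down as
# user:role:type[:level] - and the "type" part is what most policy
# rules are written against.
ls -Z /etc/passwd
# typical output: system_u:object_r:passwd_file_t:s0 /etc/passwd
```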
There is probably a gazillion different ways that you can manage and synchronize your configuration files (aka dotfiles) between different Linux/UNIX boxes – anything from custom symlink scripts, all the way to configuration management tools like Puppet and Ansible. Here are a few options to look at if you are not doing it already.
Personally, I’m using Ansible and I’m quite happy with it, as it allows me to have multiple playbooks (base configuration, desktop configuration, development setup, etc), and do more things than just manage my configuration files (install packages and tools that I often need, setup correct permissions, and more).
Recently, I came across this tutorial from Digital Ocean on how to manage your configuration files with git. Again, there are a few options discussed in there, as even with git, there’s more than one way to do it (TMTOWTDI).
The one that I heard about a long time ago but completely forgot, and which I think is quite elegant, is the approach of separating the working directory from the git repository:
Now, we do things a bit differently. We will start by specifying a different working directory using the core.worktree git configuration option:
git config core.worktree "../../"
What this does is establish the working directory relative to the path of the .git directory. The first ../ refers to the ~/configs directory, and the second one points us one step beyond that, to our home directory.
Basically, we’ve told git “keep the repository here, but the files you are managing are two levels above the repo”.
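Here is a minimal sketch of that setup, using a throwaway directory instead of the real home directory (the directory layout and the .bashrc content are made up for the demo):

```shell
# Simulate a home directory so the demo does not touch any real dotfiles.
DEMO_HOME=$(mktemp -d)
echo 'set -o vi' > "$DEMO_HOME/.bashrc"

# Create the repository in a subdirectory, then point its working tree
# two levels up (relative to the .git directory), i.e. at $DEMO_HOME.
git init -q "$DEMO_HOME/configs"
git -C "$DEMO_HOME/configs" config core.worktree "../../"

# Git commands aimed at this repository now see $DEMO_HOME as the
# working tree, so the dotfile can be tracked right where it lives:
cd "$DEMO_HOME"
git --git-dir="$DEMO_HOME/configs/.git" add .bashrc
TRACKED=$(git --git-dir="$DEMO_HOME/configs/.git" ls-files)
echo "tracked: $TRACKED"
```

In real use you would replace $DEMO_HOME with your actual home directory and wrap the long --git-dir invocation in a shell alias.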
I guess, if you stick purely to git, you can offload some of the additional processing, such as permission changes and package installation, into one of the git hooks. Something like post-checkout or post-merge.
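A hook along those lines could look something like this sketch (the file goes into .git/hooks/post-merge and must be executable; the paths and the helper script are entirely hypothetical examples):

```shell
#!/bin/sh
# Hypothetical .git/hooks/post-merge: runs after `git pull` brings in
# updated dotfiles, to redo the bits git itself does not track.

# Tighten permissions on sensitive files (git only tracks the
# executable bit, not full file modes):
chmod 600 "$HOME/.ssh/config" 2> /dev/null

# Re-run any machine setup that should follow a config update
# (hypothetical helper script):
[ -x "$HOME/bin/setup-packages.sh" ] && "$HOME/bin/setup-packages.sh"

exit 0
```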
Terminals are sexy is a curated list of Terminal frameworks, plugins & resources for CLI lovers. There are plenty of links to applications, plugins, and configurations. For me personally, the most useful one was the link to a sensible Bash configuration.
Kevin Schroeder has a blog post about the tool that he is building for configuration management in PHP. The library is still in the early pre-release stage, but it looks like it solves quite a few problems related to configuration, like nesting, inheritance, and environment/context variation.
Here’s the YouTube video that provides a bit of introduction into how to use the tool, and what to expect of it.
The only thing that dials down my excitement about this implementation is the use of XML, even though I understand why he opted for this choice.
I will need a PHP configuration management solution soon, but the priority hasn’t been raised high enough yet for me to jump into the research. If you know of any other similar tools, please let me know – it will all come in handy pretty soon.
Years ago, before the Sun Microsystems purchase of MySQL AB, there was a version of MySQL with the number 6. Sadly, it was a bit ambitious and the change of ownership left it to wither. The MySQL Cluster product has been using the 7 series for years. With the new changes for MySQL 8, developers feel they have modified it enough to bump the big number.
The new version brings a whole lot of changes: filesystem organization, indexes, faster ALTER TABLE, and more.