Having knowledge of Linux is essential for any system administration, middleware, or web engineering job.
Linux is used almost everywhere, in both production and non-production environments. There are thousands of articles, books, and video trainings to explore and learn from, but that would be time-consuming.
Instead, you can follow one or two related books or online training.
The following learning materials cover a large number of Linux administration tasks, from beginner to expert level. So pick the one that suits you.
I came across the second edition of Prentice Hall’s “A Practical Guide to Linux Commands, Editors, and Shell Programming” by Mark G. Sobell (original link). This is a rather lengthy book at just over 1,000 pages, covering everything from the history of Linux and basic commands, all the way to bash, Perl, and sed, and how things work both on the inside and the outside.
It’s probably not one of those books to read from cover to cover, but quite handy to keep as a reference and flip a few pages once in a while.
“Linux utils that you might not know” covers a few Linux command-line utilities that aren’t very well known:
- column, for “columnating” lists, which is very useful for display of table-like data (think CSV, for example);
- cal, for displaying calendars;
- factor, for calculating factors;
- numfmt, for formatting numbers and converting them to/from human-readable formats;
- shred, for overwriting the content of a deleted file, making it much more difficult to recover.
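These are easy to try out straight from the terminal. Here’s a quick illustration of each (exact output formatting may vary between coreutils/util-linux versions; the shred example creates and destroys a scratch file):

```shell
# Align comma-separated data into columns
printf 'name,age\nalice,30\nbob,25\n' | column -s, -t

# Show the calendar for a given month
cal 3 2017

# Prime factorization
factor 1000               # 1000: 2 2 2 5 5 5

# Convert a byte count to a human-readable form
numfmt --to=iec 1048576   # 1.0M

# Overwrite a file's contents a few times, then remove it
echo "sensitive" > /tmp/secret.txt
shred -u -n 3 /tmp/secret.txt
```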
Here are a couple of useful Bash resources that came upon my radar recently.
The first one is Julia Evans’ blog post “Bash scripting quirks & safety tips“. It’s quite introductory, but it has a few useful tips. The one in particular that I either didn’t know about or had completely forgotten is how to make Bash scripts safer by using “set -e“, “set -u“, and “set -o pipefail“. These go well with another post of mine from not so long ago.
The second is Sam Rowe’s blog post “Advancing in the Bash Shell“, which I found useful for all kinds of navigation and variable expansion on the Bash command line. Especially the bits on searching and reusing the history.
sshrc looks like a handy tool for those quick SSH sessions to machines where you can’t set up your full environment for whatever reason (maybe a shared account, automated templating, or restricted access). Here’s a description from the project page:
sshrc works just like ssh, but it also sources the ~/.sshrc on your local computer after logging in remotely.

```shell
$ echo "echo welcome" >> ~/.sshrc
$ sshrc me@myserver
welcome
$ echo "alias ..='cd ..'" >> ~/.sshrc
$ sshrc me@myserver
$ type ..
.. is aliased to `cd ..'
```
You can use this to set environment variables, define functions, and run post-login commands. It’s that simple, and it won’t impact other users on the server – even if they use sshrc too. This makes sshrc very useful if you share a server with multiple users and can’t edit the server’s ~/.bashrc without affecting them, or if you have several servers that you don’t want to configure independently.
I discovered it by accident while searching through packages in the Fedora repositories. So, yes, you can install it with yum/dnf.
My shell of choice and circumstance for most of my Linux life was Bash. So, naturally, in my head, shell pretty much equals Bash, and I rarely think or get into situations when this is not true. Recently, I was surprised by a script failure, which left me scratching my head. The command that failed in the script was pushd.
pushd and popd, it turns out, are built into Bash, but they are not standard POSIX commands, so not all shells have them. My script wasn’t setting the shell explicitly, and it ended up executing with Dash, which I hadn’t even heard of until that day. The homepage of Dash says the following:
DASH is not Bash compatible, it’s the other way around.
Mkay… So, I’ve done two things:
- Set /bin/bash explicitly as the shell in the script.
- Switched to “cd folder && do something && cd -“, instead of the pushd/popd combination, where possible.
I knew about “cd -“ before, but it was interesting to learn whether there are any particular differences (hint: there are) between this approach and the pushd/popd one that I was using until now. This StackOverflow thread (ok, ok, Unix StackExchange) was very helpful.
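The gist of the difference is easy to demonstrate: pushd/popd maintain a whole stack of directories, while “cd -“ only remembers the single previous directory. A quick sketch (the /tmp paths are just for illustration):

```shell
mkdir -p /tmp/a /tmp/b /tmp/c

# pushd/popd: a stack, so nested directory changes unwind cleanly
cd /tmp/a
pushd /tmp/b > /dev/null   # stack: /tmp/b /tmp/a
pushd /tmp/c > /dev/null   # stack: /tmp/c /tmp/b /tmp/a
popd > /dev/null           # back to /tmp/b
popd > /dev/null           # back to /tmp/a
pwd                        # /tmp/a

# cd -: only one level of history
cd /tmp/b
cd /tmp/c
cd - > /dev/null           # back to /tmp/b
cd - > /dev/null           # ... and back to /tmp/c again
pwd                        # /tmp/c
```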
The other day I was puzzled by the results of a cron job script. The Bash script in question was written in a hurry a while back, and I was under the assumption that if any of its steps failed, the whole script would fail. I was wrong. Some commands were failing, but the script execution continued. It was especially difficult to notice due to a number of unset variables, piped commands, and redirected error output.
Once I realized the problem, I got even more puzzled as to what the best solution was. Sure, you can check an exit code after each command in the script, but that didn’t seem elegant or efficient.
A quick couple of Google searches brought me to this StackOverflow thread (no surprise there), which opened my eyes to a few Bash options that can be set at the beginning of a script to stop execution when an error or warning occurs (similar to “use strict; use warnings;” in Perl). Here’s a test script with some test commands, pipes, error redirects, and options to control all that.
```shell
#!/bin/bash

# Stop on error
set -e
# Stop on uninitialized variables
set -u
# Stop on failed pipes
set -o pipefail

# Good command
echo "We start here ..."

# Use of a non-initialized variable
echo "$FOOBAR"
echo "Still going after uninitialized variable ..."

# Bad command with no STDERR
cd /foobar 2> /dev/null
echo "Still going after a bad command ..."

# Good command into a bad pipe with no STDERR
echo "Good" | /some/bad/script 2> /dev/null
echo "Still going after a bad pipe ..."

# This line should be unreachable
echo "We should never get here!"
```
Save it to test.sh, make it executable (chmod +x test.sh), and run it like so:
```shell
$ ./test.sh || echo Something went wrong
```
Then try to comment out some options and some commands to see what happens in different scenarios.
I think, from now on, those three options will be the standard way I start all of my bash scripts.
If you write any Bash code that lasts more than a day, you should definitely read “Defensive BASH Programming” and follow the advice, if you haven’t already. It covers the following:
- Immutable global variables
- Everything is local
- Everything is a function
- Debugging functions
- Code clarity
- Each line does just one thing
- Printing usage
- Command line arguments
- Unit Testing
All that with code examples and an explanation of why each practice matters.
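As a small taste, here’s a minimal sketch combining a few of those ideas; the script and all its names are my own illustration, not taken from the article:

```shell
#!/bin/bash
set -euo pipefail

# Immutable global variables
readonly PROGNAME=$(basename "$0")

# Printing usage
usage() {
    echo "Usage: $PROGNAME <name>"
}

# Everything is a function; everything is local
greet() {
    local name=$1
    echo "Hello, $name!"
}

main() {
    if [ $# -ne 1 ]; then
        usage
        exit 1
    fi
    greet "$1"
}

main "${1:-World}"   # default argument so the sketch runs standalone
```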
Warning: you will lose a lot of sleep if you follow the link below. :)
No matter how well you know Vim, Bash, git, and a whole slew of other command-line tools, I promise you’ll find something new in the repositories linked to from this site – something you had no idea existed, something that will help you save hours and hours of your life by shaving off a few seconds here and there on the tasks you perform daily.
I think I spent most of my Sunday there, and my dotfiles are so different now that I’m not sure I should commit and push them all in one go. I think I might need to get used to the changes first.
Some of the things that I’ve found for myself:
- PHP Integration environment for Vim (spf13/PIV).
- myrepos – provides the mr command, a tool to manage all your version control repositories.
- bash-it – a community Bash framework.
- Awesome dotfiles – a curated list of dotfiles resources.
… and a whole lot of snippets, tips, and tricks.
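In case you’re curious about myrepos: it is driven by a ~/.mrconfig file listing each repository and how to check it out. A minimal hypothetical setup (paths and URLs are made up) might look like this, after which a single “mr update” refreshes everything:

```ini
[src/dotfiles]
checkout = git clone git@github.com:me/dotfiles.git dotfiles

[src/scripts]
checkout = git clone git@github.com:me/scripts.git scripts
```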
P.S.: Make sure you don’t spend too much time on these things though :)