Troubleshooting with /dev/tcp and /dev/udp

Imagine you are on a freshly installed Linux machine with a minimal set of packages, and you need to test network connectivity.  You don’t have netcat, telnet, or your other usual tools.  For the sake of the example, imagine that even curl and wget are missing.  What do you do?

Well, apparently, there is a way to do this with plain old bash, a way which I didn’t know about until today: the /dev/tcp and /dev/udp pseudo-devices.  Here is an example verbatim from the Advanced Bash-Scripting Guide:

#!/bin/bash
# dev-tcp.sh: /dev/tcp redirection to check Internet connection.

# Script by Troy Engel.
# Used with permission.

TCP_HOST=news-15.net       # A known spam-friendly ISP.
TCP_PORT=80                # Port 80 is http.

# Try to connect. (Somewhat similar to a 'ping' . . .)
echo "HEAD / HTTP/1.0" >/dev/tcp/${TCP_HOST}/${TCP_PORT}
MYEXIT=$?

: <<EXPLANATION
If bash was compiled with --enable-net-redirections, it has the capability of
using a special character device for both TCP and UDP redirections. These
redirections are used identically as STDIN/STDOUT/STDERR. The device entries
are 30,36 for /dev/tcp:

  mknod /dev/tcp c 30 36

From the bash reference:

/dev/tcp/host/port
    If host is a valid hostname or Internet address, and port is an integer
    port number or service name, Bash attempts to open a TCP connection to the
    corresponding socket.
EXPLANATION

if [ "X$MYEXIT" = "X0" ]; then
  echo "Connection successful. Exit code: $MYEXIT"
else
  echo "Connection unsuccessful. Exit code: $MYEXIT"
fi

exit $MYEXIT

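The script above only tests whether the connection succeeds.  Since the redirection works in both directions, you can also read the server’s response back.  Here is a minimal sketch of my own (example.com and port 80 are just placeholders):

# Open a read/write TCP connection on file descriptor 3.
exec 3<>/dev/tcp/example.com/80
# Send a minimal HTTP request down the socket.
printf 'HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3
# Print whatever the server sends back, then close the descriptor.
cat <&3
exec 3<&-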

Explain Shell

Here’s a good resource for all of those who are trying to learn shell and/or figure out complex commands with lots of parameters and pipes – Explain Shell.


You just paste the command and hit the “Explain” button, and the site will decompose the command into parts, providing relevant parts from the manual pages.  There are a few examples to try it out on too.

Accessing current username in sudo scripts on CentOS

I got a bit of a puzzle at work today.  I had a script that was executed as another user via sudo, but I wanted to access the original username in the script, to know who was executing it.  The sudoers manual suggests working with “Defaults env_keep”.  Looking into /etc/sudoers, I noticed that the $USERNAME variable was whitelisted (in line #3 below):

Defaults    env_reset
Defaults    env_keep =  "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"
Defaults    env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"

So, I tried to use the $USERNAME variable in my script, but it was coming up empty.  That made me look deeper into the default Bash initialization, and I found out that setting the $USERNAME variable wasn’t a part of it.  However, $LOGNAME was (in /etc/profile).  I think so few people actually use it that nobody noticed or bothered about it until now.  Anyway, the solution was now obvious – simply add the $LOGNAME variable to the sudo whitelist.  Appending this line to the above env_keep ones did the job:

Defaults    env_keep += "LOGNAME"

There. In hopes it will help future generations…

P.S.: All that happened on a more or less default installation of CentOS 6.3, but I’m sure other Red Hat based distributions have a similar issue.

P.P.S.: If your script is ALWAYS invoked via sudo, also have a look at $SUDO_UID, $SUDO_GID, and $SUDO_USER variables.
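For illustration, here is a minimal sketch of my own that combines both approaches (it assumes $LOGNAME was whitelisted via env_keep as described above):

#!/bin/bash
# $SUDO_USER is set by sudo itself; $LOGNAME is the whitelisted fallback
# for when the script is not invoked through sudo.
ORIGINAL_USER="${SUDO_USER:-$LOGNAME}"
echo "Running as $(id -un), invoked by $ORIGINAL_USER"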

Color builder for 256-color terminal

With most Linux terminals now supporting 256 colors, a tool like this one is mighty useful for building the escape sequences.  For off-line use, all the code is available in a GitHub repository, and all the logic behind the calculations is explained in this Habrahabr post (albeit in Russian).
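For reference, the sequences such tools generate follow the standard xterm pattern: ESC[38;5;Nm sets the foreground and ESC[48;5;Nm sets the background, where N is a color number between 0 and 255.  A quick test straight from bash (the color numbers here are arbitrary):

# Color 208 (orange-ish) text on a color 236 (dark gray) background.
printf '\e[38;5;208m\e[48;5;236m Hello, 256 colors! \e[0m\n'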

Bash directory bookmarks

While reading through the comments to this Habrahabr article (in Russian), I came across an excellent tip for directory bookmarks in the bash shell.  Here’s how to set it up.

Firstly, add the following lines to your .bashrc or .bash_profile file:

# Bash Directory Bookmarks
# From:
alias m1='alias g1="cd `pwd`"'
alias m2='alias g2="cd `pwd`"'
alias m3='alias g3="cd `pwd`"'
alias m4='alias g4="cd `pwd`"'
alias m5='alias g5="cd `pwd`"'
alias m6='alias g6="cd `pwd`"'
alias m7='alias g7="cd `pwd`"'
alias m8='alias g8="cd `pwd`"'
alias m9='alias g9="cd `pwd`"'
alias mdump='alias|grep -e "alias g[0-9]"|grep -v "alias m" > ~/.bookmarks'
alias mload='source ~/.bookmarks'
alias ah='(echo;alias | grep "g[0-9]" | grep -v "m[0-9]" | cut -d" " -f "2,3"| sed "s/=/ /" | sed "s/cd //";echo)'

Secondly, if you are already using the ~/.bookmarks file for anything, change the two references to it in the above lines to some other file.  This is where your directory bookmarks will be stored.

Thirdly, if you want to preserve the bookmarks between bash sessions, add the “mload” command to the end of your .bashrc or .bash_profile file, and “mdump” to your .bash_logout file.

Start a new bash shell and you are all set.

Using this setup is extremely easy.  Navigate to the directory that you want to bookmark, and save it under the numbered bookmark:

$ cd /var/www/html
$ m1

When you want to navigate back to that folder, simply call the numbered bookmark:

$ g1

If you need to refresh your memory, issue the “ah” command (think: aliases help), and it will print out the list of your current numbered bookmarks with the directory paths associated with them.

In case you need more than nine bookmarks, simply extend the lines in your .bashrc or .bash_profile file to run through more numbers, or use some other bookmark naming convention that works for you.
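If writing out each pair of aliases by hand feels tedious, here is a sketch of my own that generates them in a loop instead (functionally the same as the lines above):

# Generate m1/g1 through m20/g20 dynamically.
for i in $(seq 1 20); do
  # Each mN alias, when invoked, (re)defines gN to cd into
  # whatever directory is current at that moment.
  alias "m$i"="alias g$i=\"cd \$(pwd)\""
done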


P.S.: A quick Google search points to the author’s page, which links to a more advanced solution – bashmarks.

Managing gettext translations on the command line

I am currently working on a rather multilingual project in the office.  And, as always, we tried a few alternatives before ending up with gettext again.  For those of you who don’t know, gettext is the de facto standard for managing language translations in software, especially when it comes to messages and user interface elements.  It’s a nice, powerful system, but it’s a bit awkward when it comes to web development.

Anyways, we started using it in a bit of a rush, without doing all the necessary planning, and quite soon ended up in a bit of a mess.  Different people used different editors to update translations, and each person’s environment was set up in a different way.  All of that made its way into the PO files that hold the translations.  What’s more, we didn’t really define a procedure for updating translations.  That became a bigger problem when we realized that Arabic had only 50 translated strings, while English had 220, and Chinese 350.  All languages were supposed to have exactly the same number of strings, even if the actual translations were missing.
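As an aside, this kind of drift is easy to spot with msgfmt’s statistics mode.  A minimal sketch, assuming all the PO files sit in the current directory:

# Print translated/fuzzy/untranslated counts for each language.
for po in *.po; do
  printf '%s: ' "$po"
  msgfmt --statistics --output-file=/dev/null "$po" 2>&1
done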

So today I had to rethink and redefine how we do it.  First of all, I had to figure out and try the process outside of the project.  It took me a good couple of hours to brush up my gettext knowledge and find some useful documentation online.  Here is a very helpful article that got me started.

After reading the article, a few manuals and playing with the actual commands, I decided on the following:

  1. The source of all translations will be a single POT file.  This file will be completely dropped and regenerated every time any strings are updated in the source code.
  2. Each language will have a PO file of its own. However, the strings for the language won’t be extracted from the source code, but from the common POT file.
  3. All editors will use the current project folder as the primary path.  In other words, “.” instead of a full path like “/var/www/foobar”.  This will make all file references in the PO/POT files relative to the project folder, ignoring the specifics of each contributor’s setup.
  4. Updating language template files (PO) and building of MO files will be a part of the project build/deploy script, to make sure everything stays as up to date as possible.

Now for the actual code.  Here is the shell script that does the job.  (Here is a link to the Gist, in case I update it in the future.)


#!/bin/bash

DOMAIN="messages"     # gettext domain (pick whatever your project uses)
POT="$DOMAIN.pot"     # the single, common template file
LANGS="en_US ru_RU"

# Create template
echo "Creating POT"
rm -f $POT
xgettext \
 --copyright-holder="2012 My Company Ltd" \
 --package-name="Project Name" \
 --package-version="1.0" \
 --msgid-bugs-address="" \
 --language=PHP \
 --sort-output \
 --keyword=__ \
 --keyword=_e \
 --from-code=UTF-8 \
 --output=$POT \
 --default-domain=$DOMAIN \
 $(find . -type f -name "*.php")

# Create languages
for LANG in $LANGS; do
 if [ ! -e "$LANG.po" ]; then
  echo "Creating language file for $LANG"
  msginit --no-translator --locale=$LANG.UTF-8 --output-file=$LANG.po --input=$POT
 fi

 echo "Updating language file for $LANG from $POT"
 msgmerge --sort-output --update --backup=off $LANG.po $POT

 echo "Converting $LANG.po to $LANG.mo"
 msgfmt --check --verbose --output-file=$LANG.mo $LANG.po
done


Now, all you need to do is run the script once to get the default POT file and a PO file for every language.  You can edit the PO files, filling in translations, as much as you want.  Then simply run the script again, and it will update the generated MO files.  No parameters, no manuals, no nothing.  If you need to add another language, just put the appropriate locale in the $LANGS variable and run the script again.  You are good to go.
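For example, to add German (assuming the script above was saved as translations.sh, which is just my name for it):

# In translations.sh, extend the languages list:
LANGS="en_US ru_RU de_DE"

# Then re-run the script; it will create de_DE.po from the POT
# and compile de_DE.mo along with the rest.
./translations.sh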