Shields.io – quality metadata badges for open source projects

Shields.io provides a large collection of badges that you can use in your project documentation (like the README.md over at GitHub or Bitbucket), which show a variety of metrics for the project – latest version, number of downloads, build status, and more.  Pretty much any badge that you’ve seen used by a project on GitHub is supported (I couldn’t think of one that wasn’t).
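
For example, embedding a static badge into a README.md is just a Markdown image pointing at a Shields.io URL (the label, message, and color below are arbitrary):

![build](https://img.shields.io/badge/build-passing-brightgreen.svg)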

Now, if only there was a way to insert these things automatically somehow …

composer-git-hooks – manage git hooks in your composer config

composer-git-hooks looks awesome!  From the project page description:

Manage git hooks easily in your composer configuration. This package makes it easy to implement a consistent project-wide usage of git hooks. Specifying hooks in the composer file makes them available for every member of the project team. This provides a consistent environment and behavior for everyone which is great.
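
Judging by the project README, the hooks live under the extra section of composer.json and are installed by the bundled cghooks command.  Here is a minimal sketch; the package name and the hook commands are my assumptions, so check the project docs before relying on them:

{
    "extra": {
        "hooks": {
            "pre-commit": "composer run lint",
            "pre-push": "composer run test"
        }
    }
}

# Pull the package in as a dev dependency and install the hooks into .git/hooks
composer require --dev brainmaestro/composer-git-hooks
./vendor/bin/cghooks add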

asciinema – record and share your terminal sessions, the right way

asciinema is a tool to record terminal sessions and share them as videos.  But unlike many other tools that provide this functionality, asciinema does a very smart thing – instead of encoding the session into a video, it replays it interactively in text mode, which allows one to select and copy-paste commands and output from the playback.  The resulting “video” is also much lighter and faster than it would be if encoded into an actual video stream.

This is great for demos, tutorials, and other more technical scenarios.  The website also has a collection of recent and featured public screencasts.
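
The basic workflow is just a couple of commands:

# Record a new session into a local file; exit the shell (or Ctrl-D) to stop recording
asciinema rec demo.cast

# Replay it right in the terminal
asciinema play demo.cast

# Or upload it to asciinema.org and get a shareable link
asciinema upload demo.cast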

GIT quick statistics

Any git repository contains a tonne of information about commits, contributors, and files.  Extracting this information is not always trivial, mostly because of a gadzillion options to a gadzillion git commands – I don’t think there is a single person alive who knows them all.  Probably not even Linus Torvalds himself.

git-quick-stats is a tool that simplifies access to some of that information and makes reports and statistics quick and easy to extract.  It also works across UNIX-like operating systems, Mac OS X, and Windows.
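
Usage is about as simple as it gets: run it from inside a repository and pick a report from the menu.  And since the executable follows git's naming convention, it can be invoked as a subcommand too:

# Interactive menu of reports, run from inside a git repository
git-quick-stats

# Same thing as a git subcommand (git picks up git-<name> executables from PATH)
git quick-stats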

PHPQA all-in-one Analyzer CLI tool

PHPQA is an all-in-one analyzer CLI tool for PHP.  The project bundles together all the usual PHP quality control tools, and then some.  It simplifies the installation and configuration of these tools and helps developers raise the quality bar on their projects.

See the project page for the full list of tools currently included.
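
Getting started looks like a Composer dev dependency plus a single command.  The package name below is what I believe it is on Packagist, so double-check before copy-pasting:

# Pull the whole toolchain in as a dev dependency (package name assumed)
composer require --dev edgedesign/phpqa

# Run the bundled tools with their default settings
./vendor/bin/phpqa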

php-enqueue – enterprise queue solutions for PHP

php-enqueue – enterprise queue solutions for PHP.  There are a number of GitHub repositories for the project.  PHP Enqueue supports a number of transports – AMQP, STOMP, filesystem – and it provides a flexible collection of classes for working with both the queue manager and the client.  If your project needs a message queue, definitely check this one out.
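
The transports ship as separate Composer packages, so installation goes roughly along these lines (the package names are from memory, so verify them on Packagist):

# Core library plus the filesystem transport (package names assumed)
composer require enqueue/enqueue enqueue/fs

# Other transports are pulled in the same way, for example:
#   composer require enqueue/amqp-ext
#   composer require enqueue/stomp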

The Difference Between UI and UX

Via this article (in Russian), I came across this blog post discussing the differences between the design of the UI (user interface) and the UX (user experience).

In many cases, the incorrect expectation is that an interface designer by default understands or focuses on user experience because their work is in direct contact with the user. The simple fact is that user interface is not user experience. The confusion may simply be because both abbreviations start with the letter “U”. More likely, it stems from the overlap of the skill-sets involved in both disciplines. They are certainly related areas, and in fact many designers are knowledgeable and competent in both.

However, despite the overlap, both fields are substantially different in nature and – more importantly – in their overall objectives and scope. User interface is focused on the actual elements that interact with the user – basically, the physical and technical methods of input and output. UI refers to the aggregation of approaches and elements that allow the user to interact with a system. This does not address details such as how the user reacts to the system, remembers the system and re-uses it.

pushd/popd vs. cd

My shell of choice and circumstance for most of my Linux life was Bash.  So, naturally, in my head, shell pretty much equals Bash, and I rarely think about, or get into, situations where this is not true.  Recently, I was surprised by a script failure, which left me scratching my head.  The command that failed in the script was pushd.

pushd and popd, it turns out, are built into Bash, but they are not standard POSIX commands, so not all shells have them.  My script wasn’t setting the shell explicitly, and ended up executing with Dash, which I hadn’t even heard of until that day.  The homepage of Dash says the following:

DASH is not Bash compatible, it’s the other way around.

Mkay… So, I’ve done two things:

  1. Set /bin/bash explicitly as the shell in the script.
  2. Switched to “cd folder && do something && cd -” instead of the pushd/popd combination, where possible.

I knew about “cd -” before, but it was interesting to learn whether there are any particular differences (hint: there are) between this approach and the pushd/popd one that I was using until now.  This StackOverflow thread (ok, ok, Unix StackExchange) was very helpful.
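
For future reference, the practical difference boils down to this: “cd -” only remembers a single previous directory (via $OLDPWD), so it toggles between two locations, while pushd/popd maintain a whole stack of directories and therefore nest properly:

# cd - toggles between exactly two directories
cd /etc
cd /var/log
cd -            # back in /etc
cd -            # ... and back in /var/log again

# pushd/popd keep a stack, so nested jumps unwind in order (Bash built-ins, not POSIX)
pushd /etc      # stack: /etc <starting directory>
pushd /var/log  # stack: /var/log /etc <starting directory>
popd            # back in /etc
popd            # back in the starting directory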

Bulletproof Bash: Stop script on error

The other day I was puzzled by the results of a cron job script.  The bash script in question was written in a hurry a while back, and I was under the assumption that if any of its steps failed, the whole script would fail.  I was wrong.  Some commands were failing, but the script execution continued.  It was especially difficult to notice due to a number of unset variables, piped commands, and redirected error output.

Once I realized the problem, I got even more puzzled as to what the best solution was.  Sure, you can check the exit code after each command in the script, but that didn’t seem elegant or efficient.

A quick couple of Google searches brought me to this StackOverflow thread (no surprise there), which opened my eyes to a few bash options that can be set at the beginning of a script to stop execution when an error or warning occurs (similar to use strict; use warnings; in Perl).  Here’s a test script for you with some test commands, pipes, error redirects, and the options to control all that.

#!/bin/bash

# Stop on error
set -e
# Stop on uninitialized variables
set -u
# Stop on failed pipes
set -o pipefail

# Good command
echo "We start here ..."

# Use of non-initialized variable
echo "$FOOBAR"
echo "Still going after uninitialized variable ..."

# Bad command with no STDERR
cd /foobar 2> /dev/null
echo "Still going after a bad command ..."

# Good command into a bad pipe with no STDERR
echo "Good" | /some/bad/script 2> /dev/null
echo "Still going after a bad pipe ..."

# Final marker - if the options above do their job, we never reach this line
echo "We should never get here!"

Save it to test.sh, make it executable (chmod +x test.sh), and run it like so:

$ ./test.sh || echo Something went wrong

Then try to comment out some options and some commands to see what happens in different scenarios.

I think, from now on, those three options will be the standard way I start all of my bash scripts.
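
For the record, the three options can also be squeezed into a single line at the top of the script:

#!/bin/bash
set -euo pipefail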