How to Synchronize WordPress Live and Development Databases

SitePoint runs through a few options that one can use to synchronize WordPress live and development databases.  I’ve linked to some of these options before, but it’s nice to have them all conveniently in one place.  The solutions discussed include WordPress-specific tools as well as generic ones, such as mysqldump, mysqlpump, rsync, and git.
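
As a rough sketch of the generic approach, a plain mysqldump round trip might look something like this (the database names, users, and file names below are placeholders for illustration, not from the article):

# On the live server: dump the WordPress database
mysqldump -u wpuser -p wordpress_live > wordpress_live.sql

# On the development machine: load the dump into the local database
mysql -u wpuser -p wordpress_dev < wordpress_live.sql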

Overall, it’s a pretty complete list of tools.  The one I’d like to add, though, is WP-CLI, which allows a great deal of automation when it comes to WordPress, including things like database imports and exports, post and option management, and more.
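
For example, here is a rough sketch of what a live-to-dev sync could look like with WP-CLI (the URLs below are placeholders for illustration):

# On the live site: export the database to a file
wp db export live.sql

# On the development copy: import it and rewrite the site URLs
wp db import live.sql
wp search-replace 'https://example.com' 'https://dev.example.com' --skip-columns=guid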


Google Open Source Website

Google announced its new Open Source website:

Today, we’re launching opensource.google.com, a new website for Google Open Source that ties together all of our initiatives with information on how we use, release, and support open source.

This new site showcases the breadth and depth of our love for open source. It will contain the expected things: our programs, organizations we support, and a comprehensive list of open source projects we’ve released. But it also contains something unexpected: a look under the hood at how we “do” open source.

The site currently features over 2,000 open source projects that Google has released and contributes to.

asciinema – record and share your terminal sessions, the right way

asciinema is a tool to record terminal sessions and share them as screencasts.  But unlike many other tools that provide this functionality, asciinema does a very smart thing – instead of encoding the session into a video, it replays it interactively in text mode, which allows one to select and copy-paste commands and output from the playback.  The resulting “video” is also much lighter and faster than an encoded video stream would be.
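
Assuming asciinema is installed, the basic workflow is just a couple of commands (the file name here is arbitrary):

# Record a session into a local file; exit the shell or press Ctrl-D to stop
asciinema rec demo.cast

# Replay it right in the terminal
asciinema play demo.cast

# Or upload it to asciinema.org for sharing
asciinema upload demo.cast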

This is great for demos, tutorials, and other more technical scenarios.  The website also has a collection of recent and featured public screencasts.

pushd/popd vs. cd

My shell of choice and circumstance for most of my Linux life has been Bash.  So, naturally, in my head, shell pretty much equals Bash, and I rarely think of or run into situations where this is not true.  Recently, I was surprised by a script failure that left me scratching my head.  The command that failed in the script was pushd.

pushd and popd, it turns out, are built into Bash, but they are not standard POSIX commands, so not all shells have them.  My script wasn’t setting the shell explicitly, and it ended up executing with Dash, which I hadn’t even heard of until that day.  The homepage of Dash says the following:

DASH is not Bash compatible, it’s the other way around.

Mkay… So, I did two things:

  1. Set /bin/bash explicitly as the shell in the script.
  2. Switch to “cd folder && do something && cd -” instead of the pushd/popd combination, where possible.

I knew about “cd -” before, but it was interesting to learn whether there are any particular differences (hint: there are) between this approach and the pushd/popd one that I had been using until now.  This StackOverflow thread (ok, ok, Unix StackExchange) was very helpful.
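
As a minimal illustration of the difference (run in an interactive Bash session): pushd and popd maintain a directory stack, so nested jumps unwind in order, while “cd -” only remembers the single previous directory:

# pushd/popd keep a stack of directories (Bash builtins, not POSIX)
pushd /tmp > /dev/null
pushd /var > /dev/null
popd > /dev/null        # back in /tmp
popd > /dev/null        # back in the original directory

# "cd -" only tracks $OLDPWD, i.e. the one previous directory
cd /tmp
cd /var
cd -                    # back in /tmp
cd -                    # back in /var, not the original starting point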

Bulletproof Bash: Stop script on error

The other day I was puzzled by the results of a cron job script.  The bash script in question was written in a hurry a while back, and I was under the assumption that if any of its steps failed, the whole script would fail.  I was wrong.  Some commands were failing, but the script execution continued.  It was especially difficult to notice due to a number of unset variables, piped commands, and redirected error output.

Once I realized the problem, I got even more puzzled as to what the best solution was.  Sure, you can check the exit code after each command in the script, but that didn’t seem elegant or efficient.
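
For reference, the manual alternative would be something along these lines after every single command, which gets old fast (some_command is just a stand-in):

some_command
if [ $? -ne 0 ]; then
    echo "some_command failed" >&2
    exit 1
fi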

A quick couple of Google searches brought me to this StackOverflow thread (no surprise there), which opened my eyes to a few bash options that can be set at the beginning of a script to stop execution when an error or warning occurs (similar to use strict; use warnings; in Perl).  Here’s a test script with some test commands, pipes, error redirects, and the options that control all that.

#!/bin/bash

# Stop on error
set -e
# Stop on uninitialized variables
set -u
# Stop on failed pipes
set -o pipefail

# Good command
echo "We start here ..."

# Use of an uninitialized variable
echo "$FOOBAR"
echo "Still going after uninitialized variable ..."

# Bad command with STDERR redirected to /dev/null
cd /foobar 2> /dev/null
echo "Still going after a bad command ..."

# Good command piped into a bad one, with STDERR redirected to /dev/null
echo "Good" | /some/bad/script 2> /dev/null
echo "Still going after a bad pipe ..."

# Final marker: if the options above work, we never reach this line
echo "We should never get here!"

Save it to test.sh, make it executable (chmod +x test.sh), and run it like so:

$ ./test.sh || echo Something went wrong

Then try to comment out some options and some commands to see what happens in different scenarios.

I think, from now on, those three options will be the standard way I start all of my bash scripts.
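
For what it’s worth, the same three options can also be combined into a single line at the top of a script, which is just shorthand for the separate set calls above:

#!/bin/bash
set -euo pipefail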