Thoughts on technology, movies, and everything else
I work in the technology sector. And I do, round the clock, not only from 9 to 5. It is my bread and butter, my hobby, and the fascination of my life. And with the current rate of change, particularly in information technology (IT), there is always something new to learn, to try, and to talk about. I often post news, thoughts, and reviews. And when I do, this is the category I use.
PHPQA all-in-one Analyzer CLI tool. This project bundles together all the usual PHP quality control tools, and then some. It simplifies the installation and configuration of these tools and helps developers raise the quality bar on their projects.
php-enqueue – enterprise queue solutions for PHP. There are a number of GitHub repositories for the project. PHP Enqueue supports several transports – AMQP, STOMP, filesystem – and it provides a flexible collection of classes for working with both the queue manager and the client. If your project needs a message queue, definitely check this one out.
Via this article (in Russian), I came across this blog post discussing the differences between the design of the UI (user interface) and the UX (user experience).
In many cases, the incorrect expectation is that an interface designer by default understands or focuses on user experience because their work is in direct contact with the user. The simple fact is that user interface is not user experience. The confusion may simply be because both abbreviations start with the letter “U”. More likely, it stems from the overlap of the skill-sets involved in both disciplines. They are certainly related areas, and in fact many designers are knowledgeable and competent in both.
However, despite the overlap, both fields are substantially different in nature and – more importantly – in their overall objectives and scope. User interface is focused on the actual elements that interact with the user – basically, the physical and technical methods of input and output. UI refers to the aggregation of approaches and elements that allow the user to interact with a system. This does not address details such as how the user reacts to the system, remembers the system and re-uses it.
My shell of choice and circumstance for most of my Linux life was Bash. So, naturally, in my head, shell pretty much equals Bash, and I rarely think or get into situations when this is not true. Recently, I was surprised by a script failure, which left me scratching my head. The command that failed in the script was pushd.
pushd and popd, it turns out, are built into Bash, but they are not standard POSIX commands, so not all shells have them. My script wasn’t setting the shell explicitly, and ended up executing with Dash, which I hadn’t even heard of until that day. The homepage of Dash says the following:
DASH is not Bash compatible, it’s the other way around.
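You can see the incompatibility right on the command line, if you have both shells installed (if dash is missing, the second command simply errors out):

```shell
# pushd is a Bash builtin, so this works fine:
bash -c 'pushd /tmp > /dev/null && pwd'

# ...but under a strict POSIX shell like dash there is no pushd at all:
dash -c 'pushd /tmp' 2>&1 || echo "no pushd in dash"
```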
Mkay… So, I’ve done two things:
Set /bin/bash explicitly as my shell in the script.
Switch to “cd folder && do something && cd -” instead of the pushd/popd combination, where possible.
I knew about “cd -” before, but it was interesting to learn whether there are any particular differences (hint: there are) between this approach and the pushd/popd one that I had been using until now. This StackOverflow thread (ok, ok, Unix StackExchange) was very helpful.
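The key difference, in a quick sketch (directory names are made up): pushd/popd maintain a whole stack of directories, while “cd -” only remembers the single previous directory, kept in $OLDPWD:

```shell
#!/bin/bash
mkdir -p /tmp/demo/a /tmp/demo/b
cd /tmp/demo

# pushd/popd: a full stack, so you can nest as deep as you like
pushd a > /dev/null        # stack: /tmp/demo/a /tmp/demo
pushd ../b > /dev/null     # stack: /tmp/demo/b /tmp/demo/a /tmp/demo
popd > /dev/null           # back in /tmp/demo/a
popd > /dev/null           # back in /tmp/demo

# cd -: POSIX-portable, but only one level of history ($OLDPWD)
cd a
cd - > /dev/null           # back in /tmp/demo
cd - > /dev/null           # "back" to /tmp/demo/a again, not further up the stack
```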
The other day I was puzzled by the results of a cron job script. The bash script in question was written in a hurry a while back, and I was under the assumption that if any of its steps failed, the whole script would fail. I was wrong. Some commands were failing, but the script execution continued. It was especially difficult to notice, due to a number of unset variables, piped commands, and redirected error output.
Once I realized the problem, I got even more puzzled as to what the best solution was. Sure, you can check the exit code after each command in the script, but that didn’t seem elegant or efficient.
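For illustration, that per-command checking would look something like this (the backup scenario and paths are made up):

```shell
#!/bin/sh
# Checking every single command by hand: it works, but it is noisy
# and far too easy to forget on one of the lines.
mkdir -p /tmp/backup
if [ $? -ne 0 ]; then
    echo "mkdir failed" >&2
    exit 1
fi

# The short-circuit form is terser, but still has to be repeated everywhere:
cp /etc/hosts /tmp/backup/ || { echo "copy failed" >&2; exit 1; }

echo "All steps succeeded"
```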
A quick couple of Google searches brought me to this StackOverflow thread (no surprise there), which opened my eyes to a few bash options that can be set at the beginning of the script to stop execution when an error or warning occurs (similar to use strict; use warnings; in Perl). Here’s a test script with some sample commands, pipes, error redirects, and the options that control all that.
#!/bin/bash

# Stop on error
set -o errexit
# Stop on uninitialized variables
set -o nounset
# Stop on failed pipes
set -o pipefail

# Good command
echo "We start here ..."

# Use of an uninitialized variable
echo "Uninitialized: ${some_unset_variable}"
echo "Still going after uninitialized variable ..."

# Bad command with no STDERR
cd /foobar 2> /dev/null
echo "Still going after a bad command ..."

# Good command into a bad pipe with no STDERR
echo "Good" | /some/bad/script 2> /dev/null
echo "Still going after a bad pipe ..."

echo "We should never get here!"
Save it to test.sh, make it executable (chmod +x test.sh), and run it like so:
$ ./test.sh || echo Something went wrong
Then try to comment out some options and some commands to see what happens in different scenarios.
I think, from now on, those three options will be the standard way I start all of my bash scripts.
The Quora thread on “What are some things you wish you knew when you started programming?” is a goldmine of wisdom. Regardless of how experienced you are – whether you’ve been programming for decades or are just considering a new career path, whichever programming languages and technology stacks you use, whether you’ve completed formal education or taught yourself everything you know – I’m sure you’ll find valuable lessons and food for thought in there.
“How to monitor your Linux servers with nmon” article provides some details on how to use the comprehensive server monitoring tool “nmon” (Nigel’s Monitor) to keep an eye on your server or two. If you have more than a handful of servers, you’d probably opt for a full-blown monitoring solution, like Zabbix, but even then, nmon can be useful for quick troubleshooting, screenshots, and data collection.
I’ve heard of nmon before and even used it occasionally. What I didn’t know was that it can collect system metrics into a file, which can then later be analyzed and graphed with the nmonchart tool.
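From what I gather, the recording-and-graphing workflow goes something like this (the interval, count, and file names here are just examples):

```shell
# Record a snapshot every 30 seconds, 120 times (an hour in total),
# into a hostname_date_time.nmon file in the current directory:
nmon -f -s 30 -c 120

# Later, turn the recorded file into a self-contained HTML page with graphs:
nmonchart myserver_240101_1200.nmon /tmp/nmon-report.html
```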
That’s pretty handy. The extra bonus is that these tools are available in most Linux distributions, so there is no need to download/compile/configure things.
David Walsh shares some thoughts on impostor syndrome. I’m sure anyone in the tech industry can relate. I certainly do.
“Impostor” is a powerful word but that’s how I have felt during all of my career as a professional web developer. I feel like I’ve learned every day of the ride but I feel like I’m way behind. I feel like people see me as something of an expert where I see myself as an accident waiting to happen. I’m a complete impostor. A fraud.
Apart from the honesty of his feelings, I like his ways of snapping out of it. They do work for me too:
Look at your (hopefully decent) employment history and know that, on a basic level, you’re much more wanted than you’re wanted gone
Log onto the IRC channel of a skill you feel comfortable with and answer questions of those asking
Realize that people who consider themselves “experts”, and don’t go through waves of self doubt, are idiots that are so arrogant to not know what they don’t know
Remember the last time a non-developer friend asked you the most basic of computer-related questions
BLOG! The worst thing that can happen is someone corrects you and you learn something out of it
Review your code and find little nits to fix
One other thing that helps me is this bit by Joe Rogan:
He talks about life more generally, but I think it’s equally applicable to technology knowledge as well.
For a large project at work, we need to integrate or develop a workflow engine. I worked a little bit with workflow engines in the past, but the subject is way too big and complex for me to claim any expertise in it.
So, I am looking at what’s available these days and what our options are. This post is a collection of initial links and thoughts; its goal is mostly to document my research process and findings, not to provide any answers or solutions yet.