WPScan Vulnerability Database – covers not only the WordPress core, but also themes and plugins.
Octotree – Google Chrome extension for browsing GitHub code repositories. I promise you, this is one of those things you won’t believe you ever lived without. Fast, convenient, with support for private repositories (via API access token), GitHub Enterprise, and keyboard shortcuts. Absolutely essential for anyone who is on GitHub!
It’s after bits like this one that I think I should spend more time reading documentation:
Create a new transaction.
This routine should _never_ be called by anything other than RT::Ticket. It should not be called from client code. Ever. Not ever. If you do this, we will hunt you down and break your kneecaps. Then the unpleasant stuff will start.
TODO: Document what gets passed to this
AWS Official Blog covers the upcoming leap second shenanigans in “Look Before You Leap – The Coming Leap Second and AWS”:
The International Earth Rotation and Reference Systems (IERS) recently announced that an extra second will be injected into civil time at the end of June 30th, 2015. This means that the last minute of June 30th, 2015 will have 61 seconds. If a clock is synchronized to the standard civil time, it will show an extra second 23:59:60 on that day between 23:59:59 and 00:00:00. This extra second is called a leap second. There have been 25 such leap seconds since 1972. The last one took place on June 30th, 2012.
Not all applications and systems are properly coded to handle this “:60” notation.
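A quick way to see how your own tooling reacts is a one-liner. As a minimal illustration (assuming GNU coreutils date on a Linux box – other systems may behave differently), check whether the parser takes the notation at all:

```shell
# Feed the leap-second timestamp to date(1) and report what happens.
if date -d '2015-06-30 23:59:60' > /dev/null 2>&1; then
    echo "this date parser accepts :60"
else
    echo "this date parser rejects :60"
fi
```

In my experience the strict branch is the common one – glibc-based tools tend to refuse a 60th second outright unless, as far as I remember, you are on one of the leap-second-aware “right/” time zones.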
SingleHop – a cloud-based hosting company – created this infographic on the cost of loss for when your backups aren’t up to par. This should work well as a reminder, especially if printed out and hung on the wall in front of a sysadmin (but also somewhere the management can occasionally see it too).
I spent a large chunk of yesterday experimenting with Vagrant on my Fedora 21 laptop. I’ve used it before, of course, but a friend asked for help with something I’d been planning to play with for a long time, so it unexpectedly led me on a journey.
Let’s start simple. If you want the least possible amount of hassle with running Vagrant on Fedora, you should use it with the Oracle VirtualBox provider (sometimes also called a hypervisor). It works great! The only trouble with this approach is that VirtualBox relies on a kernel module (the kmod-VirtualBox RPM), which has to match your currently running kernel version to a digit. This kernel module is NOT part of the official Fedora repositories; instead, it can be found in the RPM Fusion yum repository (rpmfusion-free-updates). This means that sometimes, when Fedora releases a kernel update, it might take a few days for the RPM Fusion repository to catch up with the kmod-VirtualBox updates. And this, of course, might leave your Vagrant setup broken.
The easiest way to protect against that is to disable automatic kernel, kernel module, and VirtualBox updates. To do so, add the following line to the [main] section of your /etc/yum.conf file as soon as your VirtualBox/Vagrant setup starts working:
exclude=kernel* kmod-* VirtualBox*
Now, if you forgot to do that a few times and got pissed off with this situation (or don’t like Oracle for some reason), you might consider alternatives. There are a few: Vagrant supports a variety of hypervisors. One of the common alternatives is libvirt, which ships with the Fedora distribution.
Installing libvirt is simple (thanks to this blog post). Here’s pretty much all you have to do:
yum install libvirt libvirt-daemon libvirt-daemon-qemu virt-manager
service libvirtd restart
The problem you might run into now is that libvirt is not the most popular provider for boxes in the Vagrant world. Most people seem to prefer VirtualBox. If the available boxes satisfy your needs, I’m glad for you. If they don’t, however, there is a workaround you might go for – the vagrant-mutate plugin, which converts Vagrant boxes from one hypervisor to another.
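A side note from my own setup (treat the exact commands as an assumption about your environment): Vagrant itself talks to libvirt through the vagrant-libvirt plugin, which doesn’t come bundled, so you’d want something like:

```shell
# install the libvirt provider plugin for Vagrant
vagrant plugin install vagrant-libvirt
# then pick the provider explicitly when bringing a box up
vagrant up --provider=libvirt
```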
In order to install this plugin on Fedora 21 you’ll need a few development tools first (this StackOverflow thread definitely helped with the weird g++ error):
yum install ruby-devel gcc-c++ make
Once you have those, install the Vagrant plugin as your regular user (the one who will run the Vagrant VMs):
vagrant plugin install vagrant-mutate
Now you can mutate Vagrant boxes. Unfortunately, you might find that the mutate plugin doesn’t like boxes with a slash in their names (like chef/centos-6.5). The suggested workaround is to either use box names without slashes, or to give the mutate plugin box URLs rather than names. The official boxes directory doesn’t give you URLs though, so you might be stuck with random GitHub repositories or with an alternative directory like Vagrantbox.es.
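For illustration, the URL route looks like this (the URL below is made up for the example – substitute a real box URL from one of those directories):

```shell
# fetch a VirtualBox-format box by URL and convert it for libvirt
vagrant mutate http://example.com/boxes/centos65.box libvirt
# the converted box can then be brought up with the libvirt provider
vagrant up --provider=libvirt
```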
My adventures with this aren’t over yet. Feel free to send suggestions my way. From my side, here are a couple of other useful links on this subject:
- It looks like the upcoming Fedora 22 will handle things better.
- If you are using Vagrant boxes on Windows, you are probably familiar with file permission issues across synced folders.
- If you want to have several VMs with Vagrant, here are some handy configuration snippets for those who aren’t well versed in Ruby.
One last bit of advice from me: until you are absolutely sure that your Vagrant setup works perfectly, stick to 32-bit box images. There’s nothing like ripping your hair out for three hours only to learn that your host hardware is 32-bit while you are trying to boot a 64-bit operating system.
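Here’s a ten-second host check that would have saved me those three hours (Linux-specific, and an x86 assumption – it reads the CPU flags from /proc/cpuinfo):

```shell
# kernel architecture of the host
uname -m
# on x86, the "lm" (long mode) flag means the CPU itself is 64-bit capable
if grep -qw lm /proc/cpuinfo; then
    echo "CPU is 64-bit capable"
else
    echo "no 64-bit (lm) flag found: stick to 32-bit boxes"
fi
```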
I came across the “Do Not Use Amazon Linux” opinion piece on Ex Ratione. I have to say that I mostly agree with it. When I initially started using Amazon Web Services, I assumed (mostly due to time constraints) that Amazon Linux was a close derivative of CentOS, and I opted for it. For the majority of things that affect applications in my environment that holds true; however, it’s not all as simple as it sounds.
There are in fact differences that have to be taken into account. Some of the configuration issues can be abstracted away with tools like Puppet (which I do use). But not all of them. I’ve been bitten by package name and version differences (hello PHP 5.3, 5.4, and 5.5; and MySQL vs. MariaDB) between the Amazon AMI and the CentOS distribution. It’s at its absolute worst when pushing an application from our testing and development environments into the client’s production environment. Especially when tight deadlines are involved.
One of the best arguments for CentOS is that developers can easily have their local environments (Vagrant, anyone?) set up in exactly the same way as the test and production servers.
Linux Insides – a little bit about a Linux kernel
That warm, fuzzy feeling when someone broke something and you were able to restore it from the backup…
Amazon Elastic File System, or EFS for short, is the missing piece of the cloud puzzle. With all those EC2 instances, elastic load balancers, and IAM roles, one often needs a shared file system. Until now, you’d be using either an S3-based solution, which scales well in terms of price and storage, but lacks common tools support and sometimes real-time synchronization; or an EBS-based solution, which performs much better (especially with SSD-backed storage) and works like a regular file system, but is a bit more pricey and, being a block-level solution, lacks the sharing option – so you’d have to build something like a GlusterFS setup or an NFS server on top, both of which have their own issues.
So, the arrival of the EFS, even as a preview for now, will bring joy to many.
Amazon EFS is a new fully-managed service that makes it easy to set up and scale shared file storage in the AWS Cloud. Amazon EFS supports NFSv4, and is designed to be highly available and durable. Amazon EFS can support thousands of concurrent EC2 client connections with consistent performance, making it ideal for a wide range of use cases, including content repositories, development environments, and home directories, as well as big data applications that require on-demand scaling of file system capacity and performance.
(Quote from the webinar pitch)
In terms of integration, it looks easy for the Linux crowd – the NFSv4 option is there. What’s happening in the Windows world, I’m not so sure. Gladly, that’s not my problem to worry about.
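For the Linux crowd, I’d expect mounting to be the standard NFSv4 dance, something along these lines (a sketch only – the file system ID and region below are made-up placeholders, not real EFS endpoints):

```shell
# create a mount point and mount the EFS file system over NFSv4
# (fs-12345678 and us-east-1 are placeholders for illustration)
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```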
In terms of pricing, this looks a bit expensive. The calculations are in GB-Months, with the current price being $0.30 per GB-Month. An example of 150 GB used over the first two weeks of the month and 250 GB used over the second half of the month yields a 177 GB-Month average, at a cost of $53.10 USD. Even knowing that EFS rides on SSD-based hardware and should be quite fast, the price is high. Amazon is known, however, for its regular price reductions.
So for now, I’d wait. It’s good to know that the option is there (or almost there, with the preview still pending). But for the masses to jump onto it, it’ll need to calm down its dollar hunger a bit.