This website uses cookies

I’m running Google AdSense on this website to help me get a few cents for the hosting bill (it’s literally cents, not millions of dollars, like some of you apparently think).  To comply with the EU Cookie Law, Google now requires publishers to display a cookie warning:

Please ensure that you comply with this policy as soon as possible, and not later than 30th September 2015.

If your site or app does not have a compliant consent mechanism, you should implement one now. To make this process easier for you, we have compiled some helpful resources at cookiechoices.org.

Usually, I don’t care about these things, or avoid them altogether.  But since we are facing similar issues at work, I decided to run with it and see how it works and whether it has any effect at all.

Thankfully, I didn’t have to do any work at all.  The good folks behind the Cookie Law Info plugin for WordPress have already done everything, so that’s what I’m running now.  You have the choice to either accept the cookies or leave the site.  I’m not going to fish out each cookie one by one and explain what it does.  Nobody cares. And if you do, you are probably here by mistake anyway.

WordPress Benchmark of MySQL server on Amazon EC2

I have a friend who is a newcomer to the world of WordPress.  Until recently, he was mostly working with custom-built systems and a PostgreSQL database engine, so there are many topics to cover.

One of the topics that came up today was the performance of the database engine.  A quick Google search brought up the Benchmark plugin, which we used to compare results from several servers.  (NOTE: you’ll need php-bcmath installed on your server for this plugin to work.)

My friend’s test server showed a rather poor 48 requests / second result.  And that’s on an Intel Core2 Duo E4500 machine with 4 GB of RAM and 160 GB 7200 RPM SATA HDD, running Ubuntu 12.04 x86-64.

So, I tried it on my setup.  My setup is all on Amazon EC2, using the smallest possible t2.micro servers (that’s an Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz, with 1 GB of RAM and god knows what kind of hard disk, running the Amazon Linux AMI).

First, I ran the benchmark on the test server, which hosts about 20 sites with low traffic (I didn’t want to bring up a separate instance for just a single benchmark run).  MySQL runs on the same instance as the web server.  And here are the results:

                                 Your System        Industry Average
    CPU Speed:                   38,825 BogoWips    24,896 BogoWips
    Network Transfer Speed:      97.81 Mbps         11.11 Mbps
    Database Queries per Second: 425 Queries/Sec    1,279 Queries/Sec

Second, I ran the benchmark on one of the live servers, which also hosts about 20 sites with low traffic. Here, though, the Nginx web server runs on one instance and the MySQL database on another. Here are the results:

                                 Your System        Industry Average
    CPU Speed:                   37,712 BogoWips    24,901 BogoWips
    Network Transfer Speed:      133.91 Mbps        11.15 Mbps
    Database Queries per Second: 1,338 Queries/Sec  1,279 Queries/Sec

In both cases, MySQL is v5.5.42, running with the /usr/share/doc/mysql55-server-5.5.42/my-huge.cnf configuration file. (I find it ironically pleasing that the tiniest of Amazon EC2 servers is a perfect fit for the “huge” configuration shipped with the documentation.)

The benchmark plugin explains how the numbers are calculated. Here’s what it says about the database queries:

To benchmark your database I use your wp_options table which uses the longtext column type which is the same type used by wp_posts. I do 1000 inserts of 50 paragraphs of text, then 1000 selects, 1000 updates and 1000 deletes. I use the time taken to calculate queries per second based on 4000 queries. This is a good indication of how fast your overall DB performance is in a worst case scenario when nothing is cached.
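Out of curiosity, here’s roughly what that measurement boils down to.  This is a minimal Python sketch, not the plugin’s actual code (the plugin is PHP and works on your wp_options table); the scratch table, credentials, and the mysql-connector driver below are placeholder assumptions of mine:

    # Rough re-creation of the queries-per-second measurement described
    # above.  Placeholder assumptions: a scratch table instead of
    # wp_options, made-up credentials, the mysql-connector driver.
    import time
    import mysql.connector  # pip install mysql-connector-python

    N = 1000
    TEXT = "Lorem ipsum dolor sit amet. " * 50  # stand-in for 50 paragraphs

    conn = mysql.connector.connect(host="localhost", user="bench",
                                   password="secret", database="bench")
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS bench_test "
                "(id INT AUTO_INCREMENT PRIMARY KEY, payload LONGTEXT)")

    start = time.time()
    for i in range(N):  # 1000 INSERTs
        cur.execute("INSERT INTO bench_test (payload) VALUES (%s)", (TEXT,))
    for i in range(N):  # 1000 SELECTs
        cur.execute("SELECT payload FROM bench_test WHERE id = %s", (i + 1,))
        cur.fetchall()
    for i in range(N):  # 1000 UPDATEs
        cur.execute("UPDATE bench_test SET payload = %s WHERE id = %s",
                    (TEXT, i + 1))
    for i in range(N):  # 1000 DELETEs
        cur.execute("DELETE FROM bench_test WHERE id = %s", (i + 1,))
    conn.commit()
    elapsed = time.time() - start

    print(f"{4 * N / elapsed:.0f} queries/sec over {4 * N} queries")
    cur.execute("DROP TABLE bench_test")
    conn.close()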

So, it’s a good number to throw around, but it’s far from realistic site performance: a WordPress site will mostly serve SELECTs, not INSERTs, UPDATEs, or DELETEs. Then you’ll need to see how many SQL queries you actually need per page. Then you’ll need to examine all the caching in play – browser, web server, WordPress, MySQL, and the operating system. And then, and then, and then.

But for a quick measure, I think, this is a good benchmark. It’s obvious that my friend can get a lot more out of his server without digging too deep. It’s obvious that separating web and database server into two Amazon instances gives you quite a boost. And it’s obvious that I don’t know much about performance measuring.

Backup your data! Unless you are Google

BBC reports that one of the Google data centers experienced data loss after a nearby power facility was struck by lightning four times in a row.  Only about 0.000001% of total disk space was permanently affected, it is said.

A thing called “backup” immediately comes to mind.  This was something I had to deal with in pretty much every company I worked for as a sysadmin.  Backup your data or lose it, right?

Well, maybe.  For most of those companies a dedicated storage or a couple of tape drives could easily solve the problem.  But Google is often special in one way or the other.

A quick Google search (hehe, yup) for how much data Google stores brings up this article from last year, linking to this estimation approach – there are no officially published numbers, so an estimate is all we have: 10-15 exabytes.  (10-15 exabytes, Carl!)  And that was a year ago.

Using this method, they determined that Google holds somewhere around 10-15 exabytes of data. If you are in the majority of the population that doesn’t know what an exabyte is, no worries. An exabyte equals 1 million terabytes, a figure that may be a bit easier to relate to.

Holy Moly, that’s a lot of data!  To back this up, you’ll need at least double the storage.  And some really lightning-fast (pun intended) technology.  Just to give you an idea: some of the fastest tape drives have a throughput of about 1 TB / hour and a native capacity of about 10 TB per cartridge (have a look here, for example).  The backup process will take about … forever to complete.
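Here’s the back-of-the-envelope arithmetic, in case you don’t believe me (all figures are the rough assumptions from above, not anything Google has published):

    # Back-of-the-envelope tape math for the 15 EB estimate above.
    # Assumptions: 1 TB/hour per drive, 10 TB native per cartridge,
    # drives running in parallel around the clock with no overhead.
    DATA_TB = 15 * 10**6           # 15 exabytes = 15,000,000 terabytes
    DRIVE_TB_PER_HOUR = 1
    TAPE_CAPACITY_TB = 10

    hours_one_drive = DATA_TB / DRIVE_TB_PER_HOUR
    print(f"one drive:   {hours_one_drive / 24 / 365:,.0f} years")  # ~1,712 years
    print(f"1000 drives: {hours_one_drive / 1000 / 24:,.0f} days")  # ~625 days
    print(f"cartridges:  {DATA_TB / TAPE_CAPACITY_TB:,.0f}")        # 1,500,000

Even with a thousand drives running in parallel non-stop, that’s well over a year of writing and some 1.5 million cartridges.  So, yeah: forever.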

So if tapes are out, then we are backing up onto other disk storage.  Having that storage in the same data center sort of defeats the purpose (see above regarding “lightning”).  Having storage in another data center (or several) means you’ll need some super-fast networks.

You could probably do quite a bit of optimization with incremental and differential backups (see the toy sketch below), but you’d still need quite a substantial infrastructure.
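To illustrate the incremental idea, a toy sketch in Python (this is nothing like Google’s actual tooling, just the general principle: only copy what changed since the last run; all paths are placeholders):

    # Toy incremental backup: hash every file and copy only those whose
    # digest changed since the previous run.  Illustrative only; real
    # systems also handle deletions, metadata, and block-level deltas.
    import hashlib
    import json
    import shutil
    from pathlib import Path

    SOURCE = Path("/data")            # placeholder source tree
    DEST = Path("/backup/current")    # placeholder backup target
    STATE = Path("/backup/state.json")

    def digest(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    previous = json.loads(STATE.read_text()) if STATE.exists() else {}
    current = {}

    for path in SOURCE.rglob("*"):
        if not path.is_file():
            continue
        rel = str(path.relative_to(SOURCE))
        current[rel] = digest(path)
        if previous.get(rel) != current[rel]:   # new or changed file
            target = DEST / rel
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)

    STATE.parent.mkdir(parents=True, exist_ok=True)
    STATE.write_text(json.dumps(current))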

It’s simpler, I guess, to just spread your data across many data centers, with several copies all over the place, and hope for the best.

But that’s for Google.  For the rest of us, backup is still an option.  (Read some of these horror stories if you are not convinced yet.)

And since we are on the subject of backups, let me ask you this: how are you doing backups?  Are you still with tapes, or local NAS, or, maybe, something cloud-based?  Which software do you use? What’s your strategy?

For me, dealing mostly with small setups, Amazon S3 with HashBackup is sufficient.  I don’t even need to rotate the backups anymore. Just do a full daily.

Rank of top languages on GitHub.com over time

The GitHub blog shares some trends regarding programming languages, covering both public and private repositories:

[Chart: rank of top programming languages on GitHub over time]

Interesting.  I haven’t seen many Java and C# projects myself, but I’m in a very different bubble.  PHP has stayed at #4 for years.  VimL, the language in which most plugins for the Vim editor are written, made it to #10 in 2010, which suggests that there are way more plugins than I ever thought.  The drop in Perl is also quite notable, but not very surprising.