HTTP Status Dogs – Hypertext Transfer Protocol response status codes. And dogs. If you are even a tiny bit familiar with HTTP or dogs, this will put a smile on your face. I’m thinking of using these as default error pages from now on.
OverAPI.com – Collecting All Cheat Sheets
I have a friend who is a newcomer to the world of WordPress. Until recently, he was mostly working with custom-built systems and a PostgreSQL database engine, so there are many topics to cover.
One of the topics that came up today was the performance of the database engine. A quick Google search brought up the Benchmark plugin, which we used to compare results from several servers. (NOTE: you’ll need php-bcmath installed on your server for this plugin to work.)
My friend’s test server showed a rather poor 48 requests / second result. And that’s on an Intel Core2 Duo E4500 machine with 4 GB of RAM and 160 GB 7200 RPM SATA HDD, running Ubuntu 12.04 x86-64.
So, I tried it on my setup. My setup is all on Amazon EC2, using the smallest possible t2.micro servers (that’s Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz, with 1 GB of RAM and god knows what kind of hard disk, running Amazon AMI).
First, I ran the benchmark on the test server, which hosts about 20 sites with low traffic (I didn’t want to bring up a separate instance for just a single benchmark run). MySQL runs on the same instance as the web server. And here are the results:
| | Your System | Industry Average |
| --- | --- | --- |
| CPU Speed | 38,825 BogoWips | 24,896 BogoWips |
| Network Transfer Speed | 97.81 Mbps | 11.11 Mbps |
| Database Queries per Second | 425 Queries/Sec | 1,279 Queries/Sec |
Second, I ran the benchmark on one of the live servers, which also hosts about 20 sites with low traffic. Here, though, the Nginx web server runs on one instance and the MySQL database on another. Here are the results:
| | Your System | Industry Average |
| --- | --- | --- |
| CPU Speed | 37,712 BogoWips | 24,901 BogoWips |
| Network Transfer Speed | 133.91 Mbps | 11.15 Mbps |
| Database Queries per Second | 1,338 Queries/Sec | 1,279 Queries/Sec |
In both cases, MySQL is v5.5.42, using the /usr/share/doc/mysql55-server-5.5.42/my-huge.cnf configuration file. (I find it ironically pleasing that the tiniest of Amazon EC2 servers is a perfect fit for the huge configuration shipped with the documentation.)
The benchmark plugin explains how the numbers are calculated. Here’s what it says about the database queries:
To benchmark your database I use your wp_options table which uses the longtext column type which is the same type used by wp_posts. I do 1000 inserts of 50 paragraphs of text, then 1000 selects, 1000 updates and 1000 deletes. I use the time taken to calculate queries per second based on 4000 queries. This is a good indication of how fast your overall DB performance is in a worst case scenario when nothing is cached.
So, it’s a good number to throw around, but it’s far from realistic site performance: a WordPress site will mostly serve SELECTs, not INSERTs, UPDATEs, or DELETEs. Then you’ll need to see how many SQL queries you need per page. And then you’ll need to examine all the caching in play – browser, web server, WordPress, MySQL, and the operating system. And then, and then, and then.
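The plugin’s queries-per-second arithmetic, as described in the quote above, can be sketched like this (the function and its names are mine, not the plugin’s actual code):

```python
import time

def benchmark_qps(run_query, n=1000):
    """Time n INSERTs, n SELECTs, n UPDATEs and n DELETEs, then divide
    the total query count (4 * n) by the elapsed wall-clock time."""
    start = time.perf_counter()
    for op in ("insert", "select", "update", "delete"):
        for i in range(n):
            run_query(op, i)   # in the real plugin: an actual SQL query
    elapsed = time.perf_counter() - start
    return (4 * n) / elapsed

# Demo with a no-op "query" (real runs would hit wp_options):
print(f"{benchmark_qps(lambda op, i: None, n=10):.0f} queries/sec")
```

The point of mixing all four operation types is exactly what the quote says: a worst-case number with nothing cached.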
But for a quick measure, I think, this is a good benchmark. It’s obvious that my friend can get a lot more out of his server without digging too deep. It’s obvious that separating web and database server into two Amazon instances gives you quite a boost. And it’s obvious that I don’t know much about performance measuring.
BBC reports that one of the Google data centers lost data after a nearby power facility was struck by lightning four times in a row. Only about 0.000001% of total disk space was permanently affected, it is said.
A thing called “backup” immediately comes to mind. This was something I had to deal with in pretty much every company I worked for as a sysadmin. Backup your data or lose it, right?
Well, maybe. For most of those companies a dedicated storage or a couple of tape drives could easily solve the problem. But Google is often special in one way or the other.
A quick Google search (hehe, yup) for how much data Google stores brings up this article from last year, linking to this estimation approach – there are no officially published numbers, so an estimate is all we have: 10-15 exabytes. (10-15 exabytes, Carl!) And that’s from last year.
Using this method, they determined that Google holds somewhere around 10-15 exabytes of data. If you are in the majority of the population that doesn’t know what an exabyte is, no worries. An exabyte equals 1 million terabytes, a figure that may be a bit easier to relate to.
Holy Moly, that’s a lot of data! To back it up, you’ll need at least double the storage. And some really lightning-fast (pun intended) technology. Just to give you an idea, some of the fastest tape drives have a throughput of about 1 TB / hour and a native capacity of about 10 TB (have a look here, for example). The backup process will take about … forever to complete.
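The back-of-the-envelope arithmetic, using the figures above (15 exabytes of data, ~1 TB/hour per tape drive), looks roughly like this:

```python
# Assumptions from the text: ~15 EB of data, ~1 TB/hour per tape drive.
data_tb = 15 * 1_000_000        # 15 exabytes expressed in terabytes
tb_per_hour = 1                 # throughput of a single fast tape drive

hours = data_tb / tb_per_hour
years = hours / (24 * 365)
print(f"One drive: {years:,.0f} years")          # ≈ 1,712 years

# Even an absurd fleet of 10,000 drives working in parallel:
print(f"10,000 drives: {hours / 10_000 / 24:,.1f} days")  # ≈ 62.5 days
```

And that’s ignoring the 1.5 million tape cartridges you’d need at 10 TB native capacity each.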
So if tapes are out, then we are backing up onto another storage system. Having that storage in the same data center sort of defeats the purpose (see above regarding “lightning”). Having storage in another data center (or centers) means you’ll need some super-fast networks.
You could probably do quite a bit of optimization with incremental and differential backups, but you’d still need quite a substantial infrastructure.
It’s simpler, I guess, to just spread your data across many data centers with several copies all over the place, and hope for the best.
But that’s for Google. For the rest of us, backup is still an option. (Read some of these horror stories if you are not convinced yet.)
And since we are on the subject of backups, let me ask you this: how are you doing backups? Are you still with tapes, or local NAS, or, maybe, something cloud-based? Which software do you use? What’s your strategy?
For me, dealing mostly with small setups, Amazon S3 with HashBackup is sufficient. I don’t even need to rotate the backups anymore – just do a full one daily.
I’ve mentioned Graphviz many a time on this blog. It’s simple to use, yet very powerful. The dot language is something that can be jotted down by hand in the simplest of all text editors, or generated programmatically.
The official website features a gallery, which demonstrates a wide range of graphs. But I still wanted to blog a few examples from my recent use.
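Since dot can be generated programmatically, here’s a tiny illustration of building a dot graph from plain string operations (the helper and the example edges are mine, just to show the idea):

```python
def to_dot(edges, name="deps"):
    """Build Graphviz dot source from a list of (source, target) pairs."""
    lines = [f"digraph {name} {{"]
    for src, dst in edges:
        lines.append(f'    "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

# A hypothetical service-dependency graph:
print(to_dot([("nginx", "php-fpm"), ("php-fpm", "mysql")]))
```

Pipe the output through `dot -Tpng -o graph.png` and you have a diagram.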
In a recent project I crashed into a wall. At least for a couple of days, that is. The requirement was to integrate a Request Tracker (aka RT) installation, running on a CentOS 7 server behind Nginx, with a client’s company single sign-on solution. Which wasn’t LDAP. Or Active Directory. Or anything standard at all – a completely homegrown system.
Install Elastix from USB Step by Step – came in quite handy for the box that has no DVD drive.
This thread was helpful, even though it’s for a smaller, 8-port switch. Basically:
- Disconnect the switch from all possible DHCP servers (unless you like playing hide-and-seek).
- Connect the link port directly to your laptop’s Ethernet port.
- Configure your laptop’s network interface to be in the 192.168.1.0/24 network, but avoid 192.168.1.254 (the switch’s default address).
- Now find a small pin (I used one from the office stapler) and push it into the Reset hole of the switch for about 30 seconds. You’ll see all the switch ports blink when you are done.
- Remove the pin.
- Start pinging 192.168.1.254 … it’ll take a few minutes before you get a reply.
- Once the ping starts working, navigate to http://192.168.1.254. It’ll help if your browser doesn’t have any proxy servers configured.
- Login with username cisco, password cisco.
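The “start pinging and wait a few minutes” step above can be automated. A small sketch (the helper function is mine; it assumes a Linux `ping` with the `-c`/`-W` flags):

```python
import subprocess
import time

def wait_for_host(host="192.168.1.254", timeout=600, interval=5, ping=None):
    """Poll until `host` answers a single ping or `timeout` seconds pass.

    `ping` is injectable for testing; by default it shells out to the
    system ping (-c 1: one packet, -W 2: wait 2 seconds for a reply)."""
    if ping is None:
        ping = lambda h: subprocess.run(
            ["ping", "-c", "1", "-W", "2", h],
            stdout=subprocess.DEVNULL,
        ).returncode == 0
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if ping(host):
            return True
        time.sleep(interval)
    return False
```

Run `wait_for_host()` right after releasing the Reset pin and go grab a coffee; it returns True once the switch is back.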
NISE Nexcom Series – a good selection of embedded servers and mini-PCs for home and small office needs. These things don’t require a lot of power or a dedicated cooling system, and have native support for Linux.
Inside NGINX: How We Designed for Performance & Scale