From this article, I’ve learned about the 10k Apart competition – an excellent one for our times:
Think you’ve got what it takes? You have until September 30th.
I can’t wait to see the submissions and all the ways people squeeze the awesomeness of the modern web into just 10 kilobytes. This reminds me of the Perl Golf posts over at PerlMonks and
the Assembly PC 64K Intro competitions from my early days (here are some examples).
The History of the URL is a brilliant compilation of ideas and resources, explaining how we got to the URLs we use and love (or hate) today. In fact, the article comes in two parts:
- Domain, protocol, and port
- Path, fragment, query, and auth
Read them in whatever order you prefer. But I guarantee that you’ll have a number of different responses throughout, from “Wow! I never knew that!” and “I would have never thought of that!” to “No way! I don’t believe it”.
And here is one of the bits that made me smile:
In 1996 Keith Shafer, and several others proposed a solution to the problem of broken URLs. The link to this solution is now broken. Roy Fielding posted an implementation suggestion in July of 1995. The link is now broken.
json2html – HTML Templating Engine. Available both as a jQuery plugin and a node.js package.
At the recent Google I/O 2015 conference, a production-ready version 1.0 of the Polymer library was announced. If you are not familiar with this tool, and a brief description like:
The Polymer library is designed to make it easier and faster for developers to create great, reusable components for the modern web.
doesn’t help much, then you should definitely check out the Get Started section. You’ll love it! Once you know what it does and how it works, browse the current Catalog of elements.
Via The Next Web.
Sitecake – Simple CMS for your HTML website
Until now (in fact, even yesterday) I was telling people that Google uses the HTML <title> tag of the given page when displaying search results. Turns out, this is not always true.
When working on long-running projects, it’s easy to lose track of HTML and CSS standards compliance. Link rot is also a common occurrence. Thankfully, there are command line tools that can be executed on a regular basis (think weekly or monthly cron jobs) to check the site and report any issues. Here is one way to do it.
Installation on Fedora:
yum install linkchecker
yum install python-tidy
yum install python-cssutils
Example command line:
linkchecker -t20 --check-html --check-css http://mamchenkov.net
Obviously, check the linkchecker manual for more options.
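The weekly cron job mentioned above can be sketched as a small wrapper script (the script path, report location, and email address here are my assumptions – adjust them to your setup; only the linkchecker command itself comes from the article):

```shell
#!/bin/sh
# check-site.sh - hypothetical weekly site check wrapper.
# Runs linkchecker with the options shown above, keeps the report,
# and emails it only if problems were found (non-zero exit status).

SITE="http://mamchenkov.net"
REPORT="/tmp/linkchecker-report.txt"

# -t20: use 20 threads; --check-html / --check-css validate markup and styles
if ! linkchecker -t20 --check-html --check-css "$SITE" > "$REPORT" 2>&1; then
    mail -s "linkchecker: issues found on $SITE" webmaster@example.com < "$REPORT"
fi
```

Register it with `crontab -e` by adding something like `0 3 * * 1 /usr/local/bin/check-site.sh`, which would run the check every Monday at 03:00.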