British Airways to hold a hackathon on a plane above the Atlantic ocean

Isn’t that awesome?

As plane journeys get increasingly entertaining with onboard Wi-Fi, Harlem Shakes, and cellphone service, British Airways is taking things a step further. On a transatlantic flight from San Francisco to London, the airline plans to invite 100 innovators to an 11-hour “UnGrounded” hackathon. A number of high-profile founders, CEOs, and venture capitalists will participate, with the aim of collaborating on solutions to global problems.

The group will be tasked with presenting their findings at the DNA Summit workshop after they land in London. UnGrounded is part of a larger push by British Airways to participate in the startup community. The airline recently partnered with RocketSpace, a startup accelerator based in San Francisco, to gain access to startups, and RocketSpace founder Duncan Logan will be onboard. The plane departs on June 12th (with no tablet use during takeoff, of course), and we’ll be watching closely to see what 11 hours in the skies above the Atlantic bring to the table.

Best Practical RT training

I spent the better part of Wednesday and Thursday last week in Amsterdam, at Best Practical’s RT training sessions.

Best Practical

I’ve been using Request Tracker (RT) for many years now.  The first version I saw was 2.x, and it wasn’t my own install, but I did participate in configuration and customization.  I also used it on a daily basis.  It was love at first sight.  The user interface seemed simple and straightforward (yes, it’s built by techies for techies), the internal architecture seemed transparent, it was Open Source Software, and it was written in Perl.  What’s not to love?

Once I left that company, I think I’ve installed RT in pretty much every other company I worked for (there were a couple of exceptions, where the decision wasn’t mine).  I’ve also used it on a couple of side projects and for my own needs (going through the whole 3.x series and now jumping into 4.x).  And the more I used it, the more I loved it.  I’ve even mentioned it in this blog a few times.  Have a look, for example, at this post from 2008, where I describe my toolbox, with RT as one of the core tools in it.

For the past year or so, however, I was involved in a slightly different kind of work, in a slightly different kind of environment.  Long story short, there was no RT, and I missed it a lot.  That, though, gave me an opportunity to look around once again.  I was part of a team that researched and evaluated a variety of tools that are functionally similar to RT.  And the more I looked, the more I wanted RT.  Alas, the choice once again wasn’t mine.

That all made me a bit sad and nostalgic.  Why, I kept asking myself, can’t I get back to The Golden Times of Perl programming and using capable tools such as RT?  I couldn’t find a good reason why not.  So a seed of an idea was planted in my mind.  I thought about it, and thought some more, and the idea grew.  Then I told a few people about it, and all of them seemed to like it, so I decided to pursue it.  Yet another side project, hopefully.  But one that will be heavily based on RT.

Once the direction was chosen, the stars started to align.  One of the friends who is part of the project pointed out to me that Best Practical Solutions LLC – the company behind Request Tracker – was doing a training session in Amsterdam.  Being a US-based company, their European training sessions are few and far between, so I decided to go.

On one hand, I have to say that it is a bit expensive, especially for a self-funded trip.  On the other hand, it’s an Open Source tool, the people who develop it need to make money somehow, and training is one of the legitimate ways to do so.  On the same hand, the tool has saved me countless hours (and plenty of pulled-out hair) and earned me quite a bit of cash in the process.  So it’s all fair.  I guess it’s not that it’s pricey; it’s more that I had very little time to decide, plan, and make it happen.  Anyways.  Was it worth it?

Learning new things about RT

It was!  Every bit of it!  It was one of the best technical training sessions I’ve been to, and I’ve been to a lot.  If I had to compare, I’d say it was the same level of quality I saw at Red Hat’s RHCE rapid track course.

Kevin Falcone, one of the RT architects and Best Practical’s senior engineers, was the presenter.  We only had two days to cover everything, but he came prepared.  In fact, that’s a huge understatement – he was simply top notch!  He had everything needed for the sessions with him – books, slides, notes, you name it.  He is also extremely knowledgeable about RT’s past, present, and future, about who is using it and how, about what the community is doing, and about all those tiny little details all over the RT universe – from people, concepts, and ideas to bugs, configuration options, and branch names, both merged and not.  Also, while being very serious about his work, Kevin was easy to talk to and, what is probably even more surprising, a keen listener, interested in what kinds of problems people are having with RT, where they are coming from, and so on.

As for the actual training sessions, I was a little worried that I’d be bored out of my mind during the first day.  You see, the first day was mostly for RT users – new and seasoned – while the second day focused on more hardcore stuff like installation, configuration, customization, and development.

I’m happy to report that my worries were baseless.  It turned out that even though I’ve been using RT for so long, I still have huge gaps in my knowledge and understanding.  There were quite a few things I had no idea about, and a few I knew about but for which an easier way exists, or where my understanding wasn’t totally correct.

Here is an example of the kind of knowledge bit that made quite a few people in the room go “Oh! Wow! Why didn’t I learn about this before?”.  In the default (and many a non-default) configuration, RT sends an automatic email notification to the requester upon the creation of a new ticket.  That’s a handy bit of functionality, served by a global scrip.  But what happens if you have, say, 500 queues, and you don’t want to send such a notification for two of those queues?  Until the training, I knew of two ways to do it.  One involved removing the global scrip and re-creating a queue-level scrip for each queue that should still send the notification.  The other was to update the scrip code to check the current queue against a whitelist of queues.  Both would work, but neither is elegant.  Well, apparently, there is a better, more elegant way, which also sheds some light on how RT “thinks”.  All you have to do is create a queue-level AutoReply template with no body.  The same global scrip would execute, but it would use the queue-level template instead of the global one.  And RT is smart in that when there is nothing to send, it won’t even try.  So an empty queue-level template results in an email with no content, which RT simply won’t send out.  Brilliant, isn’t it?
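For what it’s worth, the same trick can be expressed as an RT initialdata snippet rather than clicked together in the web UI.  This is only a sketch: the queue name is made up, and the exact template name to match (“Autoreply”) may vary between RT versions, so check your own global templates first.

```perl
# Sketch of an RT initialdata snippet (hypothetical queue name).
# A queue-level template with the same name overrides the global one;
# with an empty Content, the global autoreply scrip finds nothing to
# send for this queue and quietly skips the notification.
@Templates = (
    {
        Queue       => 'internal-ops',   # hypothetical queue
        Name        => 'Autoreply',      # must match the global template's name
        Description => 'Suppress autoreply for this queue',
        Content     => '',               # empty body => no mail goes out
    },
);
```

The same thing can, of course, be done by hand in the web UI under the queue’s Templates configuration.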

But as useful as the first day was, the second day was a total blast.  We covered a lot of ground (and had to move through the slides pretty fast at times), but it was like… like Neo learning kung fu in The Matrix: huge amounts of knowledge and wisdom being uploaded straight into the brain.  I think one of the reasons we could move so fast was that everyone in the room had plenty of prior RT experience.  Everyone knew what Kevin was talking about, and there was instant insight and understanding.  Or so I think.

Some of the useful things that I’ve learned during the second day included:

  • working with custom fields: creating, editing, searching (including saved searches), linking, and automatic population and extraction;
  • local installs and hacking.  Installing RT isn’t a big deal for me at this stage, but knowing a simpler and faster way of getting a local copy running for a bit of hacking here and there is always welcome;
  • initial data.  Something that is extremely useful for initializing new installs with queues, users, groups, custom fields, etc., as well as for copying data or automating batch data input;
  • safe ways of making and preserving local changes.  I’ve known about RT extensions for a while now, but I could never be bothered to figure out how to write them myself.  Now I know.
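To illustrate the “initial data” bit: an initialdata file is just a Perl file declaring arrays of records that RT loads into the database.  A minimal sketch might look like the following – all names here are made up, and the exact keys can vary between RT versions, so treat it as a shape, not gospel:

```perl
# Sketch of an RT initialdata file (hypothetical names throughout).
@Queues = (
    { Name => 'helpdesk', Description => 'General helpdesk queue' },
);

@Groups = (
    { Name => 'helpdesk-staff', Description => 'First-line support' },
);

@CustomFields = (
    {
        Name   => 'Severity',
        Type   => 'SelectSingle',    # pick one value from a fixed list
        Queue  => 'helpdesk',        # apply to this queue only
        Values => [
            { Name => 'Low',  SortOrder => 1 },
            { Name => 'High', SortOrder => 2 },
        ],
    },
);
```

If memory serves, such a file is then loaded with something along the lines of `rt-setup-database --action insert --datafile /path/to/initialdata`.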

Also, I’ve learned a great deal about RT’s roadmap for the upcoming version 4.2 and for the next probable version 4.4.

I have to say that the slides and the training “program” weren’t all that happened.  While we stuck to the slides pretty closely, Kevin made plenty of effort to focus more on the things that were interesting to the people in the room, and less on irrelevant bits and pieces.  A lot of questions were asked and answered – ranging from localization to performance optimization and database tuning.

As I said before, Kevin is very knowledgeable about “what’s out there for RT”, so, as we joked in the room, the phrase of the day was: “there is a branch for that”.  Many a time, when someone asked for a specific piece of functionality or configuration, Kevin would say that there is a branch for that and then find the appropriate git branch in a matter of seconds.  Just to give you an indication of how tricky that is, consider that there are currently 192 branches listed in RT’s GitHub repo.  (The most branched-out project I manage at work has just over 20 branches, and I get lost more times than there are days in a week.)

As far as technical questions go, I don’t think there was a single one that wasn’t answered.  In fact, there were so many questions that we kept asking well beyond the time allocated for the session, but Kevin stood his ground until each and every one of those puzzles was resolved.

Oops.  That got lengthy all of a sudden.  I promise you, that was an accident.  To wrap up, I have to say once again that it was an excellent experience, I learned plenty, it was worth every single penny, and I strongly recommend the next sessions to anyone and everyone who is using RT.  I promise you, even if you think you know it all, you’ll learn plenty of new things.


GitHub turns into an IDE

OK, maybe not an IDE just yet, but it’s no longer just a social network or a web interface to version control.  For a while now, you’ve been able to create new files and edit existing ones.  Now you can also move existing files around.

GitHub : move files


The implication of all these features together is that you no longer really need a local working environment.  You can work on projects using just GitHub’s web interface.  Of course, it’s not the most convenient way in the world, and you’d be missing a lot of commonly used features, but still: if you are on the go, or have an urgent change to make while away from your usual working environment, GitHub has you covered.  Well done, guys! Keep it up.


Google Reader alternative quest

After the news of Google Reader’s demise broke, I, like many others, started looking for an alternative.  There are many RSS readers out there, both free and commercial, but none of them is quite like Google Reader.  So I thought I’d share my adventures in hopes of getting more suggestions.

First of all, here are the things that I am looking for in an RSS reader:

  • Web based.  This is a requirement for me.  I want to be able to access my subscriptions from any computer connected to the World Wide Web.
  • OPML import and/or Google Reader synchronization.  I currently have around 300 feeds in Google Reader.  I am not going to resubscribe to each one individually and reorganize them all over again.  Ideally, I want Google Reader sync, which would preserve read items, etc.  In the worst-case scenario, at least OPML import, so I can batch-add all the feeds.
  • Rich content support.  I want to see embedded images and videos in feed items.  I want the text to keep its styling.
  • Mobile app.  This is not a requirement per se, but a much wanted option.  I read a lot of RSS on the go.
  • Free.  Again, not a requirement, but a much wanted option.

Here is a list of the ones I tried:

  • The Old Reader. It looks like the old Google Reader, but it is suffering under the spike of new accounts.  I’m trying to import my OPML, but I’m 30,000+ deep in the queue.  The number has kept going up and down for the last two days, so I’m not sure when I’ll actually be able to use the service.
  • Tiny Tiny RSS. I’ve installed it on my server, and it works reasonably well.  But the styling is very weak, and the experience is quite different from Google Reader.  It would take me forever to get used to it, and while doing so, I’d be constantly thinking about patching it up.  Removed, for now.
  • BazQux Reader.  I reviewed this service a while ago.  It has only gotten better with time.  In fact, this is the closest experience to Google Reader, with a few extra bonuses, like item comments.  The service is not free, but not too pricey – choose between $9, $19, and $29 per year.  As far as migration from Google Reader goes, this is the fastest service – two clicks, and you are already reading your feeds.  The only downside I see is the mobile experience: I couldn’t find an Android app, and the website is not suited for smaller screens.
  • Feedly.  The best styling of all the readers I’ve tried, and a nice mobile app.  But it requires a browser extension on the desktop.  Also, the experience is a bit different from Google Reader, so it needs some getting used to.

So, as you can see, I have yet to decide.  There are also quite a few alternatives that I haven’t tried.  Of the ones I have tried, the two most likely candidates are Feedly and BazQux Reader: Feedly looks beautiful and works well on mobile, while BazQux Reader provides the best experience on the desktop.

Which ones have you tried and what’s your most likely alternative?  Have you made up your mind yet?

SSH dynamic black list

Slashdot ran a post on how bots are now trying higher ports for SSH password guessing.  This is not a problem for those who use key-based authentication, but for those who have to keep password authentication enabled, there is plenty of good advice in the comments on the post.  One of the comments provides this handy iptables-based dynamic black list:

iptables --new-chain SSHTHROTTLE
# Anyone flagged as a bad actor within the last 24 hours: drop immediately
iptables --append SSHTHROTTLE --match recent --name bad_actors --update --seconds 86400 --jump DROP
# Allow up to 5 new connections per hour per source IP (with a burst of 2)
iptables --append SSHTHROTTLE --match hashlimit --hashlimit-name ssh_throttle --hashlimit-upto 5/hour --hashlimit-mode srcip --hashlimit-burst 2 --jump ACCEPT
# Over the limit: flag the source as a bad actor and drop
iptables --append SSHTHROTTLE --match recent --name bad_actors --set --jump DROP
# Route new SSH connections arriving on external interfaces through the chain above
iptables --append INPUT --in-interface ext+ --proto tcp --match conntrack --ctstate NEW --dport 22 --syn --jump SSHTHROTTLE

I haven’t tried it out myself yet, but I’m saving it here for the next time I have a server with password-based SSH authentication.