Three years at Qobo

Today marks my third anniversary as the Qobo CTO.  Here are the summary posts for the first and second years, if you are interested.

I haven’t had a boring year at Qobo yet.  And this last one was the most eventful and interesting, both business- and technology-wise.  Let’s have a quick look at the business side of things first.

Since last August, here are some of the things that happened:

  • In October 2016 we opened the Limassol office.  It’s mostly used by developers and for developers, but we had a few client meetings there too.  If things keep going as fast and as well as they are, we’ll either need to expand it soon or move to new premises.
  • In December 2016 we closed the deal with our first angel investors.  That was quite a lengthy and tedious process, during which we spoke to a lot of organizations and individuals, went through a variety of checks and audits, and figured out answers to many questions that we had never asked ourselves.  As a result, we found partners who brought in not only the money for the company’s growth, but also a wide range of business expertise.  Personally, I’ve learned a lot during and after this process, and it will hopefully boost my understanding of the business world.
  • In March 2017 we received the confirmation that our application for the Research and Development grant from the Cyprus government and European Union was approved.  This process also took quite a bit of effort and time, and is far from over.  This money will help Qobo grow even further, and once the tender is complete we’ll have quite a thing to show for it.  That’s about as much as I can say now.
  • In May 2017 we opened the London office.  This one is mostly for our sales force and the expansion of business into the UK market.
  • Over the course of the whole year, we have grown our team quite a bit as well.  We are now about 15 people, but it’s not the quantity that matters – it’s the quality.  We managed to bring in some people I’ve worked with at previous jobs (Easy Forex, FXCC, Tototheo Group, FxPro, and even as far back as PrimeTel).  And by the looks of it, the team will continue to grow.
  • And much like in previous years, we have signed more clients, completed more projects, and delivered more solutions, both locally here in Cyprus and abroad (primarily the United Kingdom).

Now let’s have a look at the technology side a bit more.  Last year I mentioned Qobrix, but I couldn’t give you any more details.  Today, Qobrix is a real thing.  It’s our own platform for rapidly building business applications.  We have developed it to a very usable state and built quite a few applications with it, from custom processes and Intranets all the way up to CRMs.  The platform is being actively developed and is maturing every day.  We have also started building a new website that provides plenty of information about it.

Big chunks of our development effort are being released as Open Source software – have a look at our ever-growing GitHub profile.  We have also contributed to a number of Open Source projects in both CakePHP and WordPress ecosystems.

We are also getting much better at this whole cloud computing thing.  Our knowledge of Amazon Web Services (AWS) is growing and improving.  We have more servers now, use more services, and are planning to expand even further.

Overall, as you can see, this was quite an intensive year, and it doesn’t look like things are slowing down.  Quite the opposite.  After three years at Qobo, I have to say that this is hands down the best job I’ve ever had (and I’ve had some pretty amazing jobs over the last couple of decades).  I’m learning a lot every single day.  I see the impact of my effort on the company as a whole, on the team, and on our clients.  And I am still humbled by the expertise and virtues of the people around me.

I’d like to thank everybody around me for all the wisdom, tips, hard work, and joyful moments during the last year.  I’ll be raising my glass tonight for many more years like this one.  Cheers!

London Trip

Swan Pub, London, UK

As some of you already know, I’ve spent most of this week in London, UK.  My first and, until now, only time in London was back in 2009, when I went there for a PHP conference (see this post and this post).

This trip was very different.  I stayed longer than last time.  It was mostly for business.  I had much less time to explore the city as a tourist.  So I thought I’d write it up, in case I need to remember some of it later.

Continue reading “London Trip”

Is group chat making you sweat?

Jason Fried has an excellent write-up on the pros and cons of using group chat for team communications, and on some of the ways to make it better.  We use HipChat in the company, and while it’s vital to our operations and I can’t even begin to think how we could do what we do without it, it does have some negative side effects – exactly as Jason describes them.

The most valuable piece of advice out of that long article is this one (I’ve heard it before a few times, but it’s worth repeating):

Think about it like sleep. If someone was interrupted every 15 minutes while they were trying to sleep, you wouldn’t think they’d be getting a good night’s sleep. So how can getting interrupted all day long lead to a good day’s work?

 

Two years at Qobo

Today marks the completion of my second year at Qobo Ltd.  The first year was quite a ride.  But the second one was even wilder.  As always, it’s difficult (and lengthy) to mention everything that happened.  A lot of that stuff is under the non-disclosure agreement (NDA) terms too.  But here are a few generic highlights:

  • Vision and strategy – most of my first year was spent putting out fires, fixing things big and small, left, right, and center.  The technology boost was necessary across the board, so it didn’t leave much time for vision and strategy.  I feel that we’ve made huge progress in this area in the last 12 months.  We have a clear vision.  We have all the stakeholders agreeing on all key elements.  We have worked out a strategy for moving forward.  And we’ve started implementing this strategy (hey, Qobrix!).  In terms of achievements, I think this was the most important area, and I am pretty happy with how things are shaping up.
  • Team changes – much like in the first year, we had quite a few changes in the team.  Some of them were unfortunate, others not so much.  The team is still smaller than what we want and need, but I think we are making progress here.  If our World Domination plans work out to even some degree, we’ll be in a much better place very soon.
  • Technology focus – we’ve continued with our goal of doing fewer things but doing them better.  Our expertise in WordPress, CakePHP and SugarCRM grew a lot.  We’ve signed and deployed a variety of projects, which resulted in more in-depth knowledge, more networking with people around each technology, more tools and practices that we can reuse in our future work.
  • Open Source Software – our GitHub profile is growing, with more repositories, pull requests, releases, features, and bug fixes.  We’ve also contributed to a variety of Open Source projects.  Our involvement with Open Source Software will continue to grow – that’s one of those things that I am absolutely sure about.
  • Hosting, continuous integration and delivery (CI/CD), and quality assurance – again, the trend continued this year.  We are using (and understanding) more of the cloud infrastructure in general and Amazon AWS in particular.  We have a much better Zabbix setup.  And our love and appreciation of Ansible grows steeply.  Let’s Encrypt is in use, but we’ll grow it to cover all our projects soon.  We are also experimenting with a variety of quality assurance tools.  We are using TravisCI for most of our Open Source work.  And we are on the brink of using the recently announced BitBucket Pipelines for our private repositories (sorry Jenkins, we’ve tried you, but … not yet).  We’ve also jumped into the ChatOps world with HipChat and its integrations, to the point that it’s difficult to imagine how we could have worked without it just a few months ago.  Codecov.io has also proved to be useful.
  • Projects, projects, projects – much like the previous year, we’ve completed a whole lot of projects (see some of our clients).  Some were simple and straightforward.  Others were complicated and challenging.  And we have more of these in the pipeline.  Overall, we’ve learned how to do more with less.  Our productivity, technical expertise, and confidence grow day by day.  I hope we keep it up for years to come.
  • Website – one thing that we had wanted to do for ages was to update our website.  Which we did, despite all the crazy things going on.  It’s not a complete redesign, but it’s a nice refresh.  And we’ve also got our blog section, which I promised you last year.  All we need to do now is use it more. ;)

There are a couple of major updates coming soon, but I am not at liberty to share them right now.  But they are very, very exciting – that’s all I can say today.  Keep an eye on our blog – we’ll definitely be sharing.

As I said, it was quite an intense year, with lots of things going on everywhere.  There were tough times, and there were easy times.  There were challenges and there were accomplishments.  There were successes, and there were mistakes and failures.  But I wouldn’t have it any other way!

After two years, I am still excited about this company and about my job here.  (Which, looking at my career so far, is not something that happens often.)  I hope the next year will continue the adventure and by the end of it I’ll be able to proudly show you a few more things.

 

Setting up NAT on Amazon AWS

When it comes to Amazon AWS, there are a few options for configuring Network Address Translation (NAT).  Here is a brief overview.

NAT Gateway

NAT Gateway is a configuration very similar to the Internet Gateway.  My understanding is that the only major difference between the NAT Gateway and the Internet Gateway is that you have control over the external public IP address of the NAT Gateway.  That’ll be one of your allocated Elastic IPs (EIPs).  This option is the simplest of the three that I considered.  If you need plain and simple NAT – then that’s a good one to go for.
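
If you go the NAT Gateway route and prefer the command line over the console, creating one takes just a couple of AWS CLI calls.  Here is a minimal sketch (the subnet and allocation IDs are placeholders, substitute your own):

# Allocate an Elastic IP, then create the NAT Gateway in a public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0123abcd --allocation-id eipalloc-0123abcd

Don’t forget to point the default route of your private subnet’s routing table at the newly created NAT Gateway.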

NAT Instance

NAT Instance is a special purpose EC2 instance, which is configured to do NAT out of the box.  If you need anything on top of plain NAT (like load balancing, or detailed traffic monitoring, or firewalls), but don’t have enough confidence in your network and system administration skills, this is a good option to choose.
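
If you want to try this option, Amazon publishes stock NAT instance images.  Here is a sketch for finding them (the exact name pattern is an assumption and may vary over time):

# List Amazon's stock NAT instance AMIs
aws ec2 describe-images --owners amazon --filters "Name=name,Values=amzn-ami-vpc-nat-*" --query "Images[].Name"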

Custom Setup

If you are a Do It Yourself kind of person, this option is for you.  But it can get tricky.  Here are a few things that I went through, learnt, and suffered through, so that you (or future me, for that matter) don’t have to.

Let’s start from the beginning.  You’ve created your own Virtual Private Cloud (VPC).  In that cloud, you’ve created two subnets – Public and Private (I’ll use this setup as the example, and will come back to what happens with more subnets).  Both of these subnets use the same routing table with the Internet Gateway.  Now you’ve launched an EC2 instance into your Public subnet and assigned it a private IP address.  This will be your NAT instance.  You’ve also launched another instance into the Private subnet, which will be your test client.  So far so good.

This instance will be used for translating internal IP addresses from the Private subnet to the external public IP address.  So, we, obviously, need an external IP address.  Let’s allocate an Elastic IP and associate it with the EC2 instance.  Easy peasy.

Now, we’ll need to create another routing table, using our NAT instance as the default gateway.  Once created, this routing table should be associated with our Private subnet.  This will cause all the machines on that network to use the NAT instance for any external communications.
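
If you prefer the command line over the console, the routing part looks roughly like this with the AWS CLI (a sketch; all the IDs are placeholders for your own):

# Create a new routing table in the VPC
aws ec2 create-route-table --vpc-id vpc-0123abcd
# Send all external traffic via the NAT instance
aws ec2 create-route --route-table-id rtb-0123abcd --destination-cidr-block 0.0.0.0/0 --instance-id i-0123abcd
# Associate the routing table with the Private subnet
aws ec2 associate-route-table --route-table-id rtb-0123abcd --subnet-id subnet-0123abcd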

Let’s take a quick side track here – security.  There are three levels that you should keep in mind:

  • Network ACLs.  These are Amazon AWS access control lists, which control the traffic allowed in and out of networks (such as our Public and Private subnets).  If the Network ACL prevents certain traffic, you won’t be able to reach the host, regardless of the host’s security configuration.  So, for the sake of the example, let’s allow all traffic in and out of both the Public and Private networks.  You can adjust this once your NAT is working.
  • Security Groups.  These are Amazon AWS permissions which control what type of traffic is allowed in or out of a network interface.  This is slightly confusing for hosts with a single interface, but super useful for machines with multiple network interfaces, especially if those interfaces are moved between instances.  Create a single Security Group (for now; you can adjust this later), which will allow any traffic in from your VPC IP range, and any outgoing traffic.  Assign this Security Group to both EC2 instances.  (A CLI sketch for this follows the list.)
  • Host firewall.  Chances are, you are using a modern Linux distribution for your NAT host.  This means that there is probably an iptables service running with some default configuration, which might prevent certain access.  I’m not going to suggest disabling it, especially on a machine facing the public Internet.  But just keep it in mind, and at the very least allow the ICMP protocol, if not from everywhere, then at least from your VPC IP range.
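
For the Security Groups item above, a minimal AWS CLI sketch could look like this (the group name, VPC ID, and IP range are assumptions, adjust them to your setup):

# Create a permissive Security Group for the NAT experiment
aws ec2 create-security-group --group-name nat-test --description "NAT test" --vpc-id vpc-0123abcd
# Allow all traffic in from the VPC range (outgoing traffic is allowed by default)
aws ec2 authorize-security-group-ingress --group-id sg-0123abcd --protocol -1 --cidr 10.0.0.0/16

Tighten all of this up once your NAT is confirmed working.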

Now, on to the actual NAT.  It is technically possible to set up and use NAT on a machine with a single network interface, but you’d probably be frowned upon by other system and network administrators.  Furthermore, it doesn’t seem to be possible on the Amazon AWS infrastructure.  I’m not 100% sure about that, but I’ve spent more time than I could afford trying to figure it out, and failed miserably.

The rest of the steps would greatly benefit from a bunch of screenshots and step-by-step click-through guides, which I am too lazy to do.  You can use this manual as a base, even though it covers a slightly different, more advanced setup.  Also, you might want to have a look at the CentOS 7 instructions for NAT configuration, and the discussion on the differences between SNAT and MASQUERADE.

We’ll need a second network interface.  You can create a new Network Interface with an IP in your Private subnet and attach it to the NAT instance.  Here comes a word of caution: there is a limit on how many network interfaces can be attached to an EC2 instance.  This limit depends on the instance type.  So, if you want to use a t2.nano or t2.micro instance, for example, you’d be limited to only two interfaces.  That’s why I used an example with two networks – to add a third interface, you’d need a much bigger instance, like t2.medium (which is total overkill for my purposes).

Now that you’ve attached the second interface to your EC2 instance, we have a few things to do.  First, you need to disable the “Source/Destination Check” on the second network interface.  You can do that in your AWS Console, or maybe even through the API (I haven’t gone that deep yet).
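
For the record, the AWS CLI can do it too.  A one-liner sketch (the interface ID is a placeholder):

# Disable the Source/Destination check on the second network interface
aws ec2 modify-network-interface-attribute --network-interface-id eni-0123abcd --no-source-dest-check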

It is time to adjust the configuration of our EC2 instance.  I’ll assume CentOS 7 Linux distribution, but it’d be very easy to adjust to whatever other Linux you are running.

Firstly, we need to configure the second network interface.  The easiest way to do this is to copy the /etc/sysconfig/network-scripts/ifcfg-eth0 file into /etc/sysconfig/network-scripts/ifcfg-eth1, and then edit the new file, changing the DEVICE variable to “eth1“.  Before you restart your network service, also edit the /etc/sysconfig/network file and add the following: GATEWAYDEV=eth0 .  This will tell the operating system to use the first network interface (eth0) as the gateway device.  Otherwise, it’ll be sending things into the Private network, and things won’t work as you expect.  Now, restart the network service and make sure that both network interfaces are there, with correct IPs, and that your routes are fine.
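
The whole sequence, roughly, looks like this (a sketch assuming the CentOS 7 paths above; if your ifcfg-eth0 contains HWADDR or UUID lines, remove them from the copy):

# Clone the first interface configuration and point it at eth1
cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i 's/^DEVICE=.*/DEVICE=eth1/' /etc/sysconfig/network-scripts/ifcfg-eth1
# Keep the default gateway on the first (Public) interface
echo "GATEWAYDEV=eth0" >> /etc/sysconfig/network
# Restart networking and verify interfaces and routes
systemctl restart network
ip addr show
ip route show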

Secondly, we need to tweak the kernel for the NAT job (sounds funny, doesn’t it?).  Edit your /etc/sysctl.conf file and make sure it has the following lines in it:

# Enable IP forwarding
net.ipv4.ip_forward=1
# Disable ICMP redirects
net.ipv4.conf.all.accept_redirects=0
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.eth0.accept_redirects=0
net.ipv4.conf.eth0.send_redirects=0
net.ipv4.conf.eth1.accept_redirects=0
net.ipv4.conf.eth1.send_redirects=0

Apply the changes with sysctl -p.

Thirdly, and lastly, configure iptables to perform the network address translation.  Edit /etc/sysconfig/iptables and make sure you have the following:

*nat
:PREROUTING ACCEPT [48509:2829006]
:INPUT ACCEPT [33058:1879130]
:OUTPUT ACCEPT [57243:3567265]
:POSTROUTING ACCEPT [55162:3389500]
-A POSTROUTING -s 10.0.0.0/16 -o eth0 -j MASQUERADE
COMMIT

Adjust the IP range from 10.0.0.0/16 to your VPC range, or whichever network you want to NAT.  Restart the iptables service and check that everything is hunky-dory (a scripted version of these checks is sketched after the list):

  1. The NAT instance can ping a host on the Internet (like 8.8.8.8).
  2. The NAT instance can ping a host on the Private network.
  3. The host on the Private network can ping the NAT instance.
  4. The host on the Private network can ping a host on the Internet (like 8.8.8.8).
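
Here is a quick sketch of those checks (the private IPs are made up, use your own):

# On the NAT instance:
ping -c 3 8.8.8.8        # check 1: out to the Internet
ping -c 3 10.0.1.10      # check 2: to a host on the Private subnet

# On the host in the Private subnet:
ping -c 3 10.0.1.5       # check 3: to the NAT instance
ping -c 3 8.8.8.8        # check 4: to the Internet, via the NAT instance

# Bonus: on the NAT instance, watch the translated traffic go out
tcpdump -n -i eth0 icmp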

If all that works fine, don’t forget to adjust your Network ACLs, Security Groups, and iptables to whatever level of paranoia is appropriate for your environment.  If something is still not working, check all of the above again, especially the security layers, IP addresses (I spent a couple of hours trying to find the problem, when it was an IP address typo – 10.0.0/16 – not the most obvious of things), network masks, etc.

Hope this helps.

Exporting messages from Gmail with fetchmail and procmail

One of the projects that I am involved in requires importing all the historical emails from a number of Gmail accounts into another system.  It’s not the most challenging of tasks, but since I spent a bit of time on it, I figured I should blog it here too, just in case a similar need arises in the future.

In my particular case, I needed two different solutions: one for exporting all of the messages from all folders of all the Gmail accounts in question (Gmail for Work), and another for exporting only the messages from the “Sent Mail” folder which were sent on specific dates.

The solution that I derived is based on the classic tools for this purpose – fetchmail and procmail.  Fetchmail is awesome at fetching emails using all kinds of protocols.  Procmail is amazing at sorting, filtering, and otherwise processing the email messages.

So, here we go.  First of all, we need to tell fetchmail where to get the messages from.  I didn’t want to create two separate configurations for each of my tasks, so I left only the options common between them in the configuration file; the rest I will be passing as command-line arguments, depending on the scenario.

Note that I’ve been running these tests from a dedicated environment, where I only had the root user.  You don’t have to run it as root – it’ll work just fine as any other user.  Also, keep in mind that I used the “/root/fetchmail-test/” folder for my test runs.  You might need to adjust the paths if yours are different.

Here’s my fetchmail.rc file, which I used to test a single mailbox.  A new “poll” section will go into this file later, for each mailbox that I’ll need to export.

poll imap.gmail.com proto imap:
username "someuser@gmail.com" is root here
password "somepass"
fetchall
keep
ssl

If you are not root, you might need to adjust the second line, replacing “root” with your username. Also, for testing purposes, you can use “fetchlimit 1” instead of “fetchall“.

Now, we need two configuration files for procmail.  The first one is super simple – I’ll use this for simply pushing all downloaded messages into a single giant mbox file.  Here’s the procmail-all.rc:

VERBOSE=0
DEFAULT=/root/fetchmail-test/fetchmail.all.mbox

As you can see, it only defines the verbosity level and the default mailbox.  The second configuration file is a bit more complicated.  I’ll use it for the sent items only.  Limiting to the sent items folder will be done with fetchmail.  But what I want to do on top of that is disregard all messages that were not sent on specific dates.  Here is my procmail-sent.rc:

VERBOSE=0
DEFAULT=/dev/null
:0
* ^Date: .*28 Jul 2016.*|\
^Date: .*27 Jul 2016.*
/root/fetchmail-test/fetchmail.sent.mbox

Again, we have the verbosity level and the default mailbox to save messages to.  Since I want to disregard messages unless they match a certain condition, I specify /dev/null.  Then I specify my condition, which is simply a bunch of regular expressions for the Date header.  Usually, the Date header is not very reliable, as different MUAs (Mail User Agents) use different formats, time zones, etc.  In this particular case, the test results seemed consistent (maybe Gmail fixes the header), and I didn’t have any other, more reliable criteria to use.

As you can see, I use a very basic condition for date matching. So, if the Date header matches either “28 Jul 2016” or “27 Jul 2016“, the message is saved in the mbox file, rather than being thrown into the default mailbox.
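
If you ever need to cover more than a couple of dates, generating these condition lines is easy to script.  Here is a little sketch, assuming GNU date (the range is made up):

#!/bin/bash
# Print a procmail Date header regex for each day in the range
start="2016-07-25"
end="2016-07-28"
day="$start"
while [ "$(date -d "$day" +%s)" -le "$(date -d "$end" +%s)" ]; do
    echo "^Date: .*$(date -d "$day" '+%d %b %Y').*"
    day="$(date -d "$day + 1 day" +%F)"
done

Keep in mind that %d pads single-digit days with a leading zero, while some Date headers don’t, so double-check the output against your actual messages.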

Now, all I need is a way to tie fetchmail and procmail together, as well as provide some additional options.  For that, I created two one-liner shell scripts, just so that I won’t need to figure out the command-line arguments if I look at this whole thing six months later.

Here is the check-all.sh script (multi-line for readability):

#!/bin/bash
fetchmail -f fetchmail.rc \
-r "[Gmail]/All Mail" \
--mda "procmail /root/fetchmail-test/procmail-all.rc"

and here is the check-sent.sh script (multi-line for readability):

#!/bin/bash
fetchmail -f fetchmail.rc \
-r "[Gmail]/Sent Mail" \
--mda "procmail /root/fetchmail-test/procmail-sent.rc"

If you run either one of these scripts, you’ll see the output similar to this:

$ ./check-all.sh
fetchmail: WARNING: Running as root is discouraged.
410 messages for someuser@gmail.com at imap.gmail.com (folder [Gmail]/All Mail).
reading message someuser@gmail.com@gmail-imap.l.google.com:1 of 410 (446 header octets) (222 body octets) not flushed
reading message someuser@gmail.com@gmail-imap.l.google.com:2 of 410 (869 header octets) (230 body octets) not flushed
reading message someuser@gmail.com@gmail-imap.l.google.com:3 of 410 (865 header octets) (230 body octets) not flushed
...


SugarCRM, RoundCube and Request Tracker integration on a single domain

In my years of working as a system administrator, I’ve done some pretty complex setups and integration solutions, but I don’t think I’ve done anything as twisted as this one recently.  The setup is part of a large and complex client project, built on the client’s infrastructure, with quite a few requirements and a whole array of limitations.  Several systems were integrated together, but in this particular post I’m focusing primarily on SugarCRM, RoundCube, and Request Tracker.  Also, I am not going to cover the integration to its full extent – just the email-related parts.

Continue reading “SugarCRM, RoundCube and Request Tracker integration on a single domain”

Upgrading Amazon EC2 instance type

By now everybody knows that one of the major benefits of using cloud services rather than hosting on your own hardware is the ease of scaling quickly.  Many web applications and large companies benefit from this, but what about smaller customers?  How about a single server?

Well, today one of our web servers was experiencing some peak loads.  It hosts a whole array of small websites built with WordPress, CakePHP, and other popular tools.  There was no time to update all these projects to work with multiple web servers.  And even redeploying them to multiple individual servers would have taken a few hours.  Instead, we decided to upgrade the server hardware.

Pause for a second and imagine the situation with your own server.  Or a dedicated hosting account for that matter.  So much to configure.  So much to backup and restore.  So much to test.

Here’s how to do it if your projects are on an Amazon EC2 instance (ours was also inside a virtual private cloud (VPC), but even if it weren’t, the difference would be insignificant); a scripted CLI version is sketched after the list:

  1. Login to the Amazon AWS console.
  2. Navigate to the Amazon EC2 section.
  3. Click on Instances in the left sidebar.
  4. Click on the instance that you want to upgrade in the list of your instances.
  5. Click Actions -> Instance State -> Stop.
  6. Wait a few seconds for the instance to stop.  You can use the Refresh button to update the list.
  7. (While your instance is still selected in the list of instances:) Click Actions -> Instance Settings -> Change Instance Type.
  8. In the popup window that appeared, select an Instance Type that you want.
  9. Click Apply.
  10. Click Actions -> Instance State -> Start.
  11. Wait a few seconds for the instance to start.
  12. Enjoy!
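
For the command line fans, the same flow can be scripted with the AWS CLI.  A sketch (the instance ID and type are placeholders):

# Stop the instance, change its type, and start it back up
aws ec2 stop-instances --instance-ids i-0123abcd
aws ec2 wait instance-stopped --instance-ids i-0123abcd
aws ec2 modify-instance-attribute --instance-id i-0123abcd --instance-type "{\"Value\": \"t2.medium\"}"
aws ec2 start-instances --instance-ids i-0123abcd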

The whole process literally takes under two minutes.  You get exactly the same configuration – hostname, IP addresses (both internal and external), mounted EBS volumes, all your OS configuration, etc.  It’s practically a reboot of your machine, but into a different hardware configuration (CPU/RAM).

Coincidentally, earlier this morning I had to pack up a rack-mountable server – screws, cables, dusty boxes, the whole shebang.  It’s been a while since I last did that.

[Instagram photo by @mamchenkov: Another day in the office #work #cables #sysadmin]

But I can tell you that I much prefer clicking a few buttons and moving on with my day.  Maybe I’m just not the hardware type.

[Instagram photo by @mamchenkov: Hard working #selfie #me #work #office]

 

WTF with Amazon and TCP

Here goes the story of me learning a few new swear words and pulling out nearly all my hair.  Grab a cup of coffee, this will take a while to tell…

First of all, here is a diagram to make things a little bit more visual.

[Diagram: the office network with NAT on the gateway, the Amazon VPC with NAT on the bastion host, and the rest of the Internet]

As you can see, we have an office network with NAT on the gateway.  We have an Amazon VPC with NAT on the bastion host.  And then there’s the rest of the Internet.

The setup is pretty straightforward.  There are no outgoing firewalls anywhere, no VLANs, no special network equipment – all of the involved machines are a variety of Linux boxes.  The whole thing has been working fine for a while now.

A couple of weeks ago we had an issue with our ISP at the office.  The Internet connection was alive, but we were getting extremely high packet loss – around 80%.  The technician came by, changed the cables, and rebooted the ADSL modem, and we also rebooted the gateway.  The problem was fixed, except for one annoying bit: we could access all of the Internet just fine, except for our Amazon VPC bastion host.  Here’s where it gets interesting.

Continue reading “WTF with Amazon and TCP”