Amazon Linux AMI : Let’s Encrypt : ImportError: No module named interface

Let’s Encrypt has only experimental support for the Amazon Linux AMI, so it’s kind of expected to have issues once in a while.   Here’s one I came across today:

# /opt/letsencrypt/certbot-auto renew
Creating virtual environment...
Installing Python packages...
Installation succeeded.
Traceback (most recent call last):
File "/root/.local/share/letsencrypt/bin/letsencrypt", line 7, in <module>
from certbot.main import main
File "/root/.local/share/letsencrypt/local/lib/python2.7/dist-packages/certbot/main.py", line 12, in <module>
import zope.component
File "/root/.local/share/letsencrypt/local/lib/python2.7/dist-packages/zope/component/__init__.py", line 16, in <module>
from zope.interface import Interface
ImportError: No module named interface

My first thought was to install the system updates, since it looked like something was off in Python-land.  But even after the “yum update” was done, the issue was still there.  A quick Google search later, thanks to this GitHub issue and this comment, the solution is the following:

pip install pip --upgrade
pip install virtualenv --upgrade
virtualenv -p /usr/bin/python27 venv27

Running the renewal of the certificates works as expected after this.
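
To double-check things before the next scheduled renewal, certbot’s dry-run mode can be used as an optional sanity check (same certbot-auto path as above; this is not part of the original fix, just a verification step):

# Test renewal against the Let's Encrypt staging servers, without touching the real certificates
/opt/letsencrypt/certbot-auto renew --dry-run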

P.S.: I wish we had fewer package and dependency managers in the world…

Amazon RDS and Amazon Virtual Private Cloud (VPC)

Yesterday I helped a friend to figure out why he couldn’t connect to his Amazon RDS database inside the Amazon VPC (Virtual Private Cloud).  It was the second time someone asked me to help with the Amazon Web Services (AWS), and it was the first time I was actually helpful.  Yey!

While I do use quite a few of the Amazon Web Services, I don’t have any experience with the Amazon RDS yet, as I’m managing my own MySQL instances.  It was interesting to get my toes wet in the troubleshooting.

Here are a few things I’ve learned in the process.

Lesson #1: Amazon supports two different ways of accessing the RDS service.  Make sure you know which one you are using and act accordingly.


If you run an Amazon RDS instance in the VPC, you’ll have to set up your networking and security access properly.  This page – Connecting to a DB Instance Running the MySQL Database Engine – will only be useful once everything else is taken care of.  It’s not your first and only manual to visit.

Lesson #2 (sort of obvious): Make sure that both your Network ACL and Security Groups allow all the necessary traffic in and out.  Double-check the IP addresses in the rules.  Make sure you are not using a proxy server when looking up your external IP address on WhatIsMyIP.com or similar.
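
If you prefer the command line, the AWS CLI makes it quick to review and, if needed, extend the rules (the security group ID and IP address below are made-up placeholders):

# Show the inbound rules of the security group attached to the RDS instance
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0

# Allow MySQL traffic from a single external IP address, if it is missing
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3306 \
    --cidr 203.0.113.10/32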

Lesson #3: Do not use ICMP traffic (ping and such) as a troubleshooting tool.  It looks like Amazon RDS won’t be ping-able even if you allow it in your firewalls.  Try with “telnet your-rds-end-point-server your-rds-end-point-port” (for example, “telnet 1.2.3.4 3306”), or with a real database client, like the command-line MySQL one.
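
A minimal connectivity check from the machine that is supposed to have access might look like this (the endpoint, port, and user are placeholders):

# Check that the TCP port is reachable at all
telnet mydb.abc123xyz.us-east-1.rds.amazonaws.com 3306

# Or test with the real MySQL client right away
mysql -h mydb.abc123xyz.us-east-1.rds.amazonaws.com -P 3306 -u dbuser -p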

Lesson #4: Make sure your routing is set up properly.  Check that the subnet in which your RDS instance resides has the correct routing table attached to it, and that the routing table has the default gateway (0.0.0.0/0) route configured to either the Internet Gateway or to some sort of NAT.  Chances are your subnet is only dealing with the private IP range and has no way of sending traffic outside.
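
The AWS CLI can show which route table a subnet is associated with and what routes it contains (the subnet ID is a placeholder):

# List the route table associated with the subnet of the RDS instance,
# including its routes (look for a 0.0.0.0/0 entry)
aws ec2 describe-route-tables \
    --filters "Name=association.subnet-id,Values=subnet-0123456789abcdef0"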

Lesson #5: When confused, disoriented, and stuck, assume it’s not Amazon’s fault.  Keep calm and troubleshoot like any other remote connection issue.  Double-check your assumptions.

There’s probably a lesson #6 in there somewhere, about contacting support or something along those lines.  But in this particular case it didn’t get to that.  Amazon AWS support is excellent though.  I had to deal with those guys twice in the last two-something years, and they were awesome.

Amazon Lightsail – virtual private servers made easy

Amazon announced a new service – Amazon Lightsail – virtual private servers made easy, starting at $5 per month.


This is basically a much simplified setup of a few of their services, such as Amazon EC2, Amazon EIP, Amazon IAM, Amazon EBS, Amazon Route 53, and a few others.  Those who don’t want to figure out all the intricacies of the infrastructure setup can just pick a VPS, click a few buttons, and be ready to go, whether they want a plain operating system or an application (like WordPress) already installed.

It’s an interesting move into the lower end of web and VPS hosting.  I don’t think all the hosting companies will survive this, but even for those that do, I think the changes are coming.

WP-CLI v1.0.0 Released!

The WP-CLI project – a command line interface to WordPress – announces the release of v1.0.0.  After 5 years of development, the tool is rock solid and stable (though that is affected to a degree by the frequent releases of WordPress itself).

There are some new features and a tonne of improvements in this release, including, as the major version bump indicates, some backward compatibility breaking changes.

If you are involved with WordPress projects, this tool is an absolute must-have, whether you are automating your deployments, doing some testing, or setting up and configuring WordPress instances on a variety of servers.  It will save you time and make your life much, much easier.  Check it out!

(At work, we are using it as part of our project-template-wordpress setup, which is our go-to repository for initializing new WordPress-based projects.)
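
Just to give a taste, here is roughly what a scripted WordPress setup looks like with WP-CLI (the database credentials, URL, and admin details below are placeholders):

# Download WordPress, generate wp-config.php, and install the site
wp core download
wp core config --dbname=example_db --dbuser=example_user --dbpass=secret
wp core install --url=example.com --title="Example Site" \
    --admin_user=admin --admin_password=secret --admin_email=admin@example.com

# Keep plugins and core up to date from the command line
wp plugin update --all
wp core update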


Magento database maintenance

If you are running a Magento-based website, make sure you add the database maintenance script to cron.  For example, append this to /etc/crontab:

# Magento log maintenance, as per
# https://docs.nexcess.net/article/how-to-perform-magento-database-maintenance.html
0 23 * * 0 root (cd /var/www/html/mysite.com && php -f shell/log.php clean)

Thanks to this page, obviously.  You’ll be surprised how much leaner your database will be, especially if you get any kind of traffic to the site.  Your database backups will also appreciate the trim.
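
Before relying on cron, it may be worth running the script by hand once to see the difference (same document root as in the crontab line above; “status” and “clean” are actions provided by the Magento 1 shell script):

cd /var/www/html/mysite.com
# Report the current state of the log tables
php -f shell/log.php status
# Clean the old log entries
php -f shell/log.php clean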


S3 static site with SSL


“S3 static site with SSL and automatic deploys using Travis” is a goldmine of all those simple technologies tied into a single knot for an impressive result.  It has a bit of everything:

  • Jekyll – a simple, blog-aware static site engine for managing content.
  • GitHub – for version control of the site’s content and for triggering the deployment chain.
  • Travis CI – for testing changes, building and deploying a new version.
  • Amazon S3 – simple, cheap, web-enabled storage of static content.
  • Amazon CloudFront – simple, cheap, geographically-distributed content delivery network (CDN).
  • Amazon Route 53 – simple and cheap DNS hosting and domain management.
  • Amazon IAM – identity and access management for the Amazon Web Services (AWS).
  • Let’s Encrypt – free SSL/TLS certificate provider.

When put together, these bits allow one to have a fast (static content combined with HTTP/2 and top-level networking) and cheap (Jekyll, GitHub, Travis, and Let’s Encrypt are free, with the rest of the services costing a few cents here and there) static website, with SSL and HTTP/2.
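
The deployment step itself boils down to a couple of commands that Travis CI runs after a successful build; something along these lines (the bucket name and CloudFront distribution ID are placeholders, and the linked article has the complete .travis.yml):

# Build the site and sync the result to the S3 bucket
jekyll build
aws s3 sync _site/ s3://example-bucket --delete

# Invalidate the CloudFront cache so the new content is served right away
aws cloudfront create-invalidation --distribution-id E1234567890ABC --paths "/*"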

This is a classic example of how accessible and available modern technology is, if (and only if) you know what you are doing.

Don’t Build Private Clouds

Subbu Allamaraju says “Don’t Build Private Clouds”.  I agree with his rationale.


There are very few enterprises in the planet right now that need to own, operate and automate data centers. Unless you’ve at least 200,000 servers in multiple locations, or you’re in specific technology industries like communications, networking, media delivery, power, etc, you shouldn’t be in the data center and private cloud business. If you’re below this threshold, you should be spending most of your time and effort in getting out of the data center and not on automating and improving your on-premise data center footprint.

His main three points are:

  1. Private cloud makes you procrastinate doing the right things.
  2. Private cloud cost models are misleading.
  3. Don’t underestimate the influence of an on-premise data center on your organization’s culture.


Using Ansible to bootstrap an Amazon EC2 instance

This article – “Using Ansible to Bootstrap My Work Environment Part 4” – is pure gold for anyone trying to figure out all the moving parts needed to automate the provisioning and configuration of an Amazon EC2 instance with Ansible.

Sure, some bits are easier than others, but it takes time to go from one step to the next.  In this article, you have everything you need, including the provisioning Ansible playbook and variables, cloud-init bits, and more.
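
The general shape of such a run (not the exact playbook from the article; the playbook name, inventory, and variables below are made up for illustration) is a playbook executed locally, which talks to the EC2 API and then configures the newly created host:

# Run the provisioning playbook against the local machine;
# the playbook itself calls the EC2 API to create and configure the instance
ansible-playbook provision-ec2.yml -i "localhost," --connection=local \
    -e "aws_region=us-east-1 instance_type=t2.micro"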

I’ve printed and laminated my copy.  It’s on the wall now.  It will provide me with countless hours of joy during the upcoming Christmas season.

Automate OpenVPN client on CentOS 7

I needed to set up an OpenVPN client to start automatically on a CentOS 7 server for one of our recent projects at work.  I’m not well versed in VPN technology, but the majority of the time was spent on something that I didn’t expect.

I got the VPN configuration and all the necessary certificates from the client, installed OpenVPN, and tried it out.  It seemed to work just fine.  But setting it up to start automatically and without any human intervention took much longer than I thought it would.

The first issue that I came across was the username and password input necessary for the VPN connection to be established.  The solution to that is simple (thanks to this comment):

  1. Create a new text file (for example, /etc/openvpn/auth) with the username being the first line of the file, and the password being the second.  Don’t forget to limit the permissions to read-only by root.
  2. Add the following line to the VPN configuration file (assuming /etc/openvpn/client.conf): “auth-user-pass auth”.  Here, the second “auth” is the name of the file, relative to the VPN configuration.

With that, the manual startup of the VPN (openvpn client.conf) was working.
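
A minimal sketch of those two steps (the username and password are placeholders):

# Credentials file: username on the first line, password on the second,
# readable only by root
printf 'vpnuser\nvpnpassword\n' > /etc/openvpn/auth
chmod 600 /etc/openvpn/auth

# Point the client configuration at the credentials file
echo 'auth-user-pass auth' >> /etc/openvpn/client.conf

# Manual test run
cd /etc/openvpn && openvpn client.conf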

Now, how do we start the service automatically?  The old-school knowledge suggested “service openvpn start”.  But that fails due to openvpn being an unknown service.  Weird, right?

“rpm -ql openvpn” pointed in the direction of a systemd service (“systemctl start openvpn”).  But that failed too.  The name of the service looked strange, too:

# rpm -ql openvpn | grep service
/usr/lib/systemd/system/openvpn@.service

A little (well, not that little after all) digging around revealed something that I didn’t know.  Systemd services can be templated and started with different configuration files.  In this case, you can run “systemctl start openvpn@foobar” to start the OpenVPN service using the “foobar” configuration file, which should be in “/etc/openvpn/foobar.conf”.

What’s that config file and where do I get it from?  Well, the OpenVPN configuration sent from our client had an “account@host.ovpn” file, which is exactly what’s needed.  So, renaming “account@host.ovpn” to “client.conf” and moving it together with all the other certificate files into the “/etc/openvpn” folder allowed me to do “systemctl start openvpn@client”.  All you need now is to make the service start automatically at boot time and you are done.
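
For completeness, the final commands on CentOS 7 look like this:

# Start the OpenVPN client using /etc/openvpn/client.conf
systemctl start openvpn@client

# Check that the tunnel is up
systemctl status openvpn@client

# Make it start automatically at boot
systemctl enable openvpn@client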