Red Hat Satellite

Here’s something I didn’t know about – Red Hat Satellite.  From the FAQ page:

Red Hat® Satellite is a system management solution that makes Red Hat infrastructure easier to deploy, scale, and manage across physical, virtual, and cloud environments. Red Hat Satellite enables users to provision, configure, and update systems to help ensure that they are running efficiently and securely, and remain compliant with relevant standards. By automating most tasks related to maintaining systems, Red Hat Satellite helps organizations increase efficiency, reduce operational costs, and enable IT to better respond to strategic business needs.

Now Red Hat’s acquisition of Ansible makes even more sense.  I guess their satellite is looking for the galaxy.

Open Source software is so reassuring …

There’s nothing like working on a problem for a few days and then coming across a reassuring code snippet like this:

sub PSGIApp {
    my $self = shift;

    # XXX: this is fucked
    require HTML::Mason::CGIHandler;
    require HTML::Mason::PSGIHandler::Streamy;
    my $h = RT::Interface::Web::Handler::NewHandler('HTML::Mason::PSGIHandler::Streamy');

    $self->InitSessionDir;

    my $mason = sub {
        my $env = shift;

        # mod_fastcgi starts with an empty %ENV, but provides it on each
        # request.  Pick it up and cache it during the first request.
        $ENV{PATH} //= $env->{PATH};

        # HTML::Mason::Utils::cgi_request_args uses $ENV{QUERY_STRING} to
        # determine if to call url_param or not
        # (see comments in HTML::Mason::Utils::cgi_request_args)
        $ENV{QUERY_STRING} = $env->{QUERY_STRING};

The first comment is misleading.  It throws you off.  It almost makes you close the file and go somewhere else.  But that’s just a little frustration from the last few days.  The solution to my problem is here too…  And that’s when the warm, cozy feeling I have for Open Source Software kicks in.

P.S.: both the problem and the solution will be posted separately.

Support lesson to learn from Amazon AWS

I’ve said a million times how happy I am with Amazon AWS.  Today I also want to share a positive lesson to learn from their technical support.  It’s the second time I’ve contacted them over the last year and a half, and the second time I’ve been amazed at how well it works.

In my experience, technical support departments usually rely on one primary communication channel – be that a telephone, an email, a ticketing system, or a live chat.  The other channels are often just routed or converted into the main one, or even completely ignored.  But each of these channels has its own benefits and side effects.

The telephone provides the most immediate connectivity, and the much-valued option of human interaction.  But the communication is verbal, often without a paper trail.  That makes it difficult to carbon copy (CC) people on the conversation, or to review exactly what has been said.  It is also very free-form and unstructured.

Live chat is also free-form and unstructured, but it’s written, so transcripts are easily available.  It also helps with carbon copying, but only on the receiving end – supervisors or subject-matter experts can often be included in the conversation, while adding somebody from the requesting side is rarely supported.

Email makes it easy to carbon copy people on both ends.  It provides a paper trail, but often lacks the immediate response factor.  And it’s still unstructured, making it difficult to figure out what was requested, what has been discussed, and whether or not there was any resolution.  (Have you ever been part of a lengthy multilingual conversation about what turned out to be multiple issues in the same thread?)

Ticketing/support systems help to structure the conversation and make it follow a certain workflow.  But they often lack humanity and, much like email, the immediate response.

Now, what Amazon AWS support has done is a beautiful combination of a ticketing system and a phone.  You start off with the ticketing system – log in, create a new support case with all the necessary information, and optionally CC other people, all from a single short form.  The moment you submit it, the web page asks for your phone number.  Once you enter it, the system immediately places a phone call, connecting you to a support engineer.  The engineer confirms a few case details and lets you know that the case is in progress and what the expected resolution time is (I was asking to raise the limit of Elastic IP addresses on a Virtual Private Cloud, and I was told it would be done within the next 15 to 30 minutes.  It was done in 10!).  I also received two emails – one confirming the opening of the case, with all the requested details, and another notifying me that the work had been done, with quick information on how to follow up in case I needed to.

The overall experience was very smooth, fast, to the point, and very effective.  I never got lost.  I never had to figure anything out.  And my problem was attended to and resolved immediately.

I only wish more companies provided this level of support.  I’ll sure try to do the same – but it’s a bar set high.

Ansible safety net for DNS wildcard hosts

After using Ansible for only a week, I am deeply in love.  I am doing more and more with less and less, and that’s exactly how I want my automation.

Today I had to solve an interesting problem.  Ansible operates based on a host and group inventory.  As I mentioned before, I now always rely on FQDNs (fully qualified domain names) for my host names.  But what happens when DNS wildcards come into play, with things like load balancers and reverse proxies?  Consider an example:

  1. Nginx is configured as a reverse proxy on the machine proxy1.example.com with the IP address 10.0.0.10.
  2. A DNS wildcard is in place: *.example.com 3600 IN CNAME proxy1.example.com.
  3. The Ansible inventory contains proxy1.example.com, and there is a playbook to set up the reverse proxy with Nginx.
  4. The inventory contains a few other hosts, and there is a playbook to set up Nginx as a web server.
  5. Somebody adds a new host to the inventory – another-web-server.example.com – without specifying any other host details, such as the ansible_ssh_host variable, and also forgets to update the DNS zone with a new A or CNAME record (see the inventory sketch below).
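
To make the failure scenario easier to picture, here is a minimal inventory sketch (YAML inventory format).  It is purely illustrative – the group names and web1.example.com are my inventions for this example:

all:
  children:
    proxies:
      hosts:
        proxy1.example.com:              # A record -> 10.0.0.10
    webservers:
      hosts:
        web1.example.com:                # existing host with a proper DNS record
        another-web-server.example.com:  # new host, no DNS record yet; the
                                         # *.example.com wildcard resolves it
                                         # to proxy1.example.com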

Now, an Ansible play is executed for the web server configuration.  All previously existing machines are fine.  But the new machine’s host name, another-web-server.example.com, resolves to proxy1.example.com – which is where Ansible connects and runs the Nginx setup, overwriting the existing configuration, triggering a service restart, and screwing up your life.  Just kidding, of course. :)  It’ll be trivial to find out what happened, and fixing Nginx isn’t too difficult either, especially if you have backups in place.  But it’s still better to avoid the whole mess altogether.

To help prevent these cases, I decided to create a new safety net role.  Given a variable like:

---
# aliased_ips maps an IP address to the primary hostname for
# that IP. Hosts listed here can be reached in multiple ways
# due to DNS wildcards. Both IPv4 and IPv6 can be used. Any
# other inventory hostname having any of these IPs will cause
# a failure in the play.
aliased_ips:
  "10.0.0.10": "proxy1.example.com"
  "192.168.0.10": "proxy1.example.com"

And the following code in the role’s tasks/main.yml:

---
- debug: msg="Safety net - before IPv4"

- name: Check all IPv4 addresses against aliased IPs
  fail: msg="DNS is not configured for host '{{ inventory_hostname}}'. It resolves to '{{ aliased_ips[ item.0 ] }}'."
  when: "('{{ item[0] }}' == '{{ item[1] }}') and ('{{ inventory_hostname }}' != '{{ aliased_ips[ item.0 ] }}')"
  with_nested:
    - "{{ aliased_ips | default({}) }}"
    - "{{ ansible_all_ipv4_addresses }}"

- debug: msg="Safety net - after IPv4 and before IPv6"

- name: Check all IPv6 addresses against aliased IPs
  fail: msg="DNS is not configured for host '{{ inventory_hostname}}'. It resolves to '{{ aliased_ips[ item.0 ] }}'."
  when: "('{{ item[0] }}' == '{{ item[1] }}') and ('{{ inventory_hostname }}' != '{{ aliased_ips[ item.0 ] }}')"
  with_nested:
    - "{{ aliased_ips | default({}) }}"
    - "{{ ansible_all_ipv6_addresses }}"

- debug: msg="Safety net - after IPv6"

the safety net is in place.  The first check will connect to the remote server, get the list of all configured IPv4 addresses, and then compare each one with each IP address in the aliased_ips variable.  For every matching pair, it will check whether the remote server’s host name from the inventory file matches the host name from the aliased_ips value for the matched IP address.  If the host names match, it’ll continue.  If not, a failure in the play occurs (Ansible-speak for a thrown exception).  Other tasks will continue execution for other hosts, but nothing else will be done during this play run for this particular host.

The second check will do the same but with IPv6 addresses.  You can mix and match both IPv4 and IPv6 in the same aliased_ips variable.  And Ansible is smart enough to exclude the localhost IPs too, so things shouldn’t break too much.

I’ve tested the above and it seems to work well for me.
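
For completeness, here is roughly how the role gets wired in – a minimal playbook sketch, where safety-net and nginx-web are just illustrative placeholder names:

---
# Run the safety net role first, so a mis-resolving host fails
# before any real changes are made to it.
- hosts: webservers
  roles:
    - safety-net   # the role with the checks above (name is illustrative)
    - nginx-web    # whatever role actually configures Nginx as a web server

The order matters here: the safety net has to run before any role that changes the system.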

There is a tiny issue with elegance here though: host name to IP mappings are already configured in the DNS zone – duplicating this configuration in the aliased_ips variable seems annoying.  Personally, I don’t have that many reverse proxies and load balancers to handle, and they don’t change too often either, so I don’t mind.  Also, there is something about relying on DNS while trying to protect against DNS misconfiguration that rubs me the wrong way.  But if you are the adventurous type, have a look at Ansible’s dig lookup, which you can use to fetch the IP addresses from the DNS server of your choice.
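
If you do go that route, a rough sketch might look something like the one below.  This is untested on my side and comes with assumptions: the dig lookup needs the dnspython module on the control machine, it resolves names as the controller sees them (not as the remote host does), and proxy1.example.com is just the example name from above:

---
# Resolve host names at run time with the dig lookup, instead of
# hard-coding the IPs in the aliased_ips variable. The lookup runs
# on the control machine, so this checks DNS as the controller sees it.
- name: Check this host's DNS record against the proxy's
  fail:
    msg: "'{{ inventory_hostname }}' resolves to the same IP as '{{ item }}'."
  when: inventory_hostname != item and lookup('dig', inventory_hostname) == lookup('dig', item)
  with_items:
    - proxy1.example.com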

As always, if you see any potential issues with the above or know of a better way to solve it, please let me know.

SugarCRM, RoundCube and Request Tracker integration on a single domain

In my years of working as a system administrator I’ve done some pretty complex setups and integration solutions, but I don’t think I’ve done anything as twisted as this one recently.  The setup is part of a large and complex client project, built on their infrastructure, with quite a few requirements and a whole array of limitations.  Several systems were integrated together, but in this particular post I’m focusing primarily on SugarCRM, RoundCube, and Request Tracker.  Also, I am not going to cover the integration to its full extent – just the email-related parts.
