Ansible: rolling upgrades or updates.

Making a change to live servers in production is something that has to be done with extreme care and planning. Several deployment patterns, such as blue/green, canary, and rolling updates, are in use today to minimize user impact. Ansible can be used to orchestrate a zero-downtime rolling change to a service.

A typical upgrade of an application, such as patching, might go like this –

  1. Disable monitoring alerts for the node
  2. Pull the node out of the load balancer
  3. Make changes to the server
  4. Reboot the node
  5. Wait for the node to come back up and run a sanity check
  6. Put the node back into the load balancer
  7. Re-enable monitoring for the node

Rinse and repeat.

Ansible is a great choice for orchestrating the above steps. Let us start with an inventory of web servers, a load balancer, and a monitoring node running Nagios –

[webservers]
web1.example.net
web2.example.net
web3.example.net
web4.example.net
web5.example.net

[balancer]
haproxy.example.net

[monitoring]
nagios.example.net
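
Before writing the playbook, a quick ad-hoc ping confirms that Ansible can reach every host in the inventory. The file name hosts is an assumption here; use whatever path your inventory is saved under –

ansible all -i hosts -m ping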

The web servers are running Apache (httpd), and we will patch both Apache and the kernel. For the kernel patch to take effect, the servers need to be rebooted. We will perform the patching one node at a time, wait for the node to be healthy, and then move on to the next. The first portion of our playbook would be something like this –

---
- hosts: webservers
  serial: 1

  pre_tasks:
  - name: Stop apache service
    service: name=httpd state=stopped

  tasks:
  - name: update apache
    yum: name=httpd state=latest
  - name: Update Kernel
    yum: name=kernel state=latest
  - name: Reboot server
    # schedule the reboot one minute out so Ansible can disconnect cleanly
    shell: /sbin/shutdown -r +1

  post_tasks:
  - name: Wait for webserver to come up
    wait_for: host={{ inventory_hostname }} port=80 state=started delay=65 timeout=300
    delegate_to: 127.0.0.1
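
The serial: 1 directive is what turns this into a rolling update: Ansible runs the entire play to completion on one host before starting the next. For a large fleet, updating one node at a time may be too slow; serial also accepts a percentage or a ramp-up list, as in the sketch below –

- hosts: webservers
  serial:
  - 1          # first batch: a single canary host
  - 5          # then five hosts at a time
  - "25%"      # then a quarter of the remaining hosts per batch

Assuming the inventory is saved as hosts and the playbook as rolling_update.yml, the whole rolling update is kicked off with ansible-playbook -i hosts rolling_update.yml.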

I haven’t included the playbook tasks for disabling/enabling monitoring or for removing/adding the node to the load balancer, since the exact procedure depends on the monitoring system and load balancer technology you are using. In addition, the sanity check shown here is a simple probe of port 80; in reality a much more sophisticated validation can be done.
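
For completeness, here is a rough sketch of what those omitted tasks could look like with the inventory above, using Ansible's stock nagios and haproxy modules together with delegate_to so the tasks run on the monitoring and balancer nodes while the play iterates over the web servers. The HAProxy backend name (www), the stats socket path, and the assumption that the HAProxy server names match the inventory hostnames are placeholders that will differ per environment –

  pre_tasks:
  - name: Disable Nagios alerts for this host
    nagios: action=disable_alerts host={{ inventory_hostname }} services=all
    delegate_to: nagios.example.net

  - name: Take the host out of the HAProxy backend
    # backend name and socket path are assumptions
    haproxy: state=disabled host={{ inventory_hostname }} backend=www socket=/var/run/haproxy.sock
    delegate_to: haproxy.example.net

  post_tasks:
  - name: Put the host back into the HAProxy backend
    haproxy: state=enabled host={{ inventory_hostname }} backend=www socket=/var/run/haproxy.sock wait=yes
    delegate_to: haproxy.example.net

  - name: Re-enable Nagios alerts for this host
    nagios: action=enable_alerts host={{ inventory_hostname }} services=all
    delegate_to: nagios.example.net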

References –

http://docs.ansible.com/ansible/latest/playbooks_delegation.html

http://docs.ansible.com/ansible/latest/guide_rolling_upgrade.html