
PostgreSQL Replication using Salt

At Radialpoint, we have started to use Salt for almost everything related to system administration.
So why not do some DBA work with Salt too?
I started with my database of choice, PostgreSQL, and the first thing that should be done after every decent database installation: replication.

So let's start.
Here is the plan, in Salt vocabulary, the overstate:

base:
  match: '*'

backup_master:
  match: 'G@roles:master'
  require:
    - base
  sls:
    - postgresql.backup

push_backup:
  match: 'G@roles:master'
  require:
    - backup_master
  function:
    cp.push:
      - "{{ pillar['backup_dir'] }}/replication_backup.tar.bz2"

restore_backup:
  match: 'G@roles:slave'
  require:
    - push_backup
  sls:
    - postgresql.restore

IMHO the steps are pretty obvious: do a state.highstate on every server, back up the DB on the master and push it to the Salt master, then grab the backup from the Salt master and restore it on the slave.
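
If you are wondering how this gets executed: the overstate is kicked off from the Salt master with the state.over runner, which by default picks up overstate.sls from the root of your base environment (typically /srv/salt/overstate.sls):

salt-run state.over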

Let’s go over each step:

postgresql/backup.sls

{{ pillar['backup_dir'] }}:
  file.directory:
    - user: postgres
    - group: postgres
    - mode: 700
    - makedirs: True

stop_postgresql:
  service.dead:
    - name: postgresql-9.3
    - require:
      - file: {{ pillar['backup_dir'] }}

backup_tar:
  cmd.run:
    - name: tar cjvf {{ pillar['backup_dir'] }}/replication_backup.tar.bz2 --exclude=postmaster.pid /var/lib/pgsql/9.3/data {{ pillar['dba']['archive_dir'] }}
    - require:
      - file: {{ pillar['backup_dir'] }}
      - service: stop_postgresql

start_postgresql:
  service.running:
    - name: postgresql-9.3
    - require:
      - cmd: backup_tar

You just need to define the 'backup_dir' pillar (along with 'dba:archive_dir'), which you have already seen in the overstate.
The state creates the directory to hold the backup, stops the service, creates a tar archive of the data and WAL archive directories, and brings the service back up.
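
For reference, a minimal pillar backing these states might look like the following; the paths here are just placeholders, use whatever fits your servers:

pillar/postgresql.sls

backup_dir: /opt/backups/postgresql
dba:
  archive_dir: /var/lib/pgsql/9.3/archive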

postgresql/restore.sls

stop_postgresql:
  service.dead:
    - name: postgresql-9.3
    - prereq:
      - cmd: restore_database_backup

{{ pillar['backup_dir'] }}:
  file.directory:
    - user: postgres
    - group: postgres
    - mode: 700
    - makedirs: True

{% set master_id = salt["mine.get"]("G@env_name:" + grains['env_name'] + " and G@roles:master", "grains.get", "compound").values()[0] %}
{{ pillar['backup_dir'] }}/replication_backup.tar.bz2:
  file.managed:
    - source: salt://{{ master_id }}{{ pillar['backup_dir'] }}/replication_backup.tar.bz2
    - require:
      - file: {{ pillar['backup_dir'] }}

restore_database_backup:
  cmd.wait:
    - name: tar xjvf {{ pillar['backup_dir'] }}/replication_backup.tar.bz2 --directory /
    - watch:
      - file: {{ pillar['backup_dir'] }}/replication_backup.tar.bz2

recovery.conf:
  file.managed:
    - name: /var/lib/pgsql/9.3/data/recovery.conf
    - source: salt://postgresql/recovery.conf
    - template: jinja
    - user: postgres
    - group: postgres
    - mode: 600
    - watch:
      - cmd: restore_database_backup

start_postgresql:
  service.running:
    - name: postgresql-9.3
    - watch:
      - cmd: restore_database_backup
      - file: recovery.conf

The only interesting part is getting replication_backup.tar.bz2 from the master using MinionFS. In fact, this is the reason I developed MinionFS.
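
A couple of configuration bits have to be in place for this to work. On the Salt master, cp.push has to be allowed and MinionFS enabled as a fileserver backend (pushed files land under /var/cache/salt/master/minions/<minion-id>/files and are served back out as salt://<minion-id>/<path>):

file_recv: True

fileserver_backend:
  - roots
  - minionfs

And for the mine.get calls to return anything, the DB master minion needs mine_functions along these lines in its minion config or pillar; publishing grains.get with the 'id' argument is what makes values()[0] in the state above resolve to the master's minion id:

mine_functions:
  grains.get:
    - id
  network.ip_addrs: []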

By the way, if you were wondering about recovery.conf, here it is:

postgresql/recovery.conf

standby_mode          = 'on'
{% for ip_addr in salt["mine.get"]("G@env_name:"+grains['env_name']+" and G@roles:master", "network.ip_addrs", "compound").values() %}
primary_conninfo      = 'host={{ ip_addr[0] }} port=5432 user=postgres'
{% endfor %}
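
Since only one minion matches the master role in a given environment, the loop renders a single line; the end result on a slave looks something like this (the IP is made up):

standby_mode          = 'on'
primary_conninfo      = 'host=10.1.2.3 port=5432 user=postgres'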