Setup Streaming Replication with repmgr and barman in PostgreSQL
repmgr is an open-source tool suite for automating, enhancing, and managing PostgreSQL streaming replication.
Setting up and managing streaming replication natively involves many manual steps, including:
1. Configuring replication parameters in both primary and each standby node.
2. Backing up primary node data with pg_basebackup from each standby node and restoring it there.
3. Restarting the standby node(s).
On the operational side, typical tasks include:
1. Checking replication status using SQL statements.
2. Promoting a standby node when a switchover is necessary or when the primary is unavailable.
3. Recreating replication from the new read/write node to existing or new standby nodes.
Since repmgr 3.2, we can integrate Barman with repmgr and clone a standby directly from a Barman backup. This means the PostgreSQL server is not subjected to the I/O load caused by a backup, and there is no need to manage WAL retention on the database server other than to ensure WALs are archived to Barman. The standby can also use the Barman archive as a fallback source of WAL files in case streaming replication is interrupted.
In two recent blogs we discussed how to set up streaming replication (click here to read more) and how to set up a traditional Barman setup with WAL streaming (click here to read more). In this blog, we will demonstrate how to integrate Barman with repmgr and set up streaming replication in PostgreSQL.
Below are the high-level steps we will follow to set up our replication:
1. repmgr Installation
2. repmgr Setup:
2.1: repmgr user and a repmgr database,
2.2: passwordless ssh setup,
2.3: repmgr configuration,
2.4: pg_hba.conf & postgresql.conf.
3. Node registration:
3.1: Master Server registration,
3.2: Standby DB Clone,
3.3: Standby Server Registration.
4. Test Replication Setup.
5. Monitoring Replication Setup.
Below are the setup details that will be used in this demonstration.
Sr No. | Hostname | IP | Role |
1 | test-machine02 | 192.168.114.176 | Master / Primary Server |
2 | test-machine01 | 192.168.114.177 | Standby / Secondary Server |
3 | test-machine01 | 192.168.114.177 | Barman Server (Backup Server) |
Step 1. repmgr Installation: repmgr needs to be installed on both the Master and Standby servers. Use the OS command yum install.
Master Server
[root@test-machine02 ~]# yum install repmgr_13
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package repmgr_13.x86_64 0:5.2.1-1.rhel7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================
Installing:
repmgr_13 x86_64 5.2.1-1.rhel7 pgdg13 284 k
Transaction Summary
==============================================================================================================================================================================================
Install 1 Package
Total download size: 284 k
Installed size: 1.0 M
Is this ok [y/d/N]: y
Downloading packages:
repmgr_13-5.2.1-1.rhel7.x86_64.rpm | 284 kB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : repmgr_13-5.2.1-1.rhel7.x86_64 1/1
Verifying : repmgr_13-5.2.1-1.rhel7.x86_64 1/1
Installed:
repmgr_13.x86_64 0:5.2.1-1.rhel7
Complete!
[root@test-machine02 ~]#
Standby Server
[root@test-machine01 ~]# yum install repmgr_13
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package repmgr_13.x86_64 0:5.2.1-1.rhel7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================
Installing:
repmgr_13 x86_64 5.2.1-1.rhel7 pgdg13 284 k
Transaction Summary
==============================================================================================================================================================================================
Install 1 Package
Total download size: 284 k
Installed size: 1.0 M
Is this ok [y/d/N]: y
Downloading packages:
repmgr_13-5.2.1-1.rhel7.x86_64.rpm | 284 kB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : repmgr_13-5.2.1-1.rhel7.x86_64 1/1
Verifying : repmgr_13-5.2.1-1.rhel7.x86_64 1/1
Installed:
repmgr_13.x86_64 0:5.2.1-1.rhel7
Complete!
[root@test-machine01 ~]#
Step 2.1. repmgr Setup – repmgr user and DB Creation: Create the repmgr user and a repmgr database in the Master Server's PostgreSQL instance. repmgr uses its own database to store its metadata. Once the repmgr user is created, set its search_path.
Master Server
[root@test-machine02 ~]#
[root@test-machine02 ~]# su - postgres
Last login: Mon Aug 2 10:48:33 +03 2021 on pts/1
-bash-4.2$ psql
psql (13.2)
Type "help" for help.
postgres=#
postgres=# create user repmgr replication password 'repmgr';
CREATE ROLE
postgres=# create database repmgr owner repmgr;
CREATE DATABASE
postgres=#
postgres=#
postgres=# ALTER USER repmgr SET search_path TO repmgr, public;
ALTER ROLE
postgres=#
postgres=# \dt repmgr.*
List of relations
Schema | Name | Type | Owner
--------+--------------------+-------+----------
repmgr | events | table | postgres
repmgr | monitoring_history | table | postgres
repmgr | nodes | table | postgres
repmgr | voting_term | table | postgres
(4 rows)
postgres=#
Step 2.2. repmgr Setup – passwordless SSH setup: Ideally the Standby and Barman should be on different servers, but for demo purposes both run on the same server, test-machine01. We need to set up a passwordless SSH connection between the Standby server and the Barman server. We already configured passwordless SSH between the Barman server and the Master Server during the traditional Barman setup.
[root@test-machine01 ~]#
[root@test-machine01 ~]# su - barman
Last login: Mon Aug 1 10:52:46 +03 2021 on pts/0
-bash-4.2$
-bash-4.2$ hostname
test-machine01
-bash-4.2$ id
uid=994(barman) gid=990(barman) groups=990(barman) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
-bash-4.2$
-bash-4.2$ ssh-copy-id -i ~/.ssh/id_rsa.pub postgres@test-machine01
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/var/lib/barman/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
postgres@test-machine01's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'postgres@test-machine01'"
and check to make sure that only the key(s) you wanted were added.
-bash-4.2$ ssh 'postgres@test-machine01'
Last login: Sat Jul 31 21:48:55 2021
-bash-4.2$ exit
logout
Connection to test-machine01 closed.
-bash-4.2$
[root@test-machine01 etc]#
[root@test-machine01 etc]# su - postgres
Last login: Sun Aug 1 18:05:30 +03 2021 from test-machine01 on pts/2
-bash-4.2$
-bash-4.2$ hostname
test-machine01
-bash-4.2$ id
uid=26(postgres) gid=26(postgres) groups=26(postgres) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
-bash-4.2$
-bash-4.2$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/pgsql/.ssh/id_rsa):
Created directory '/var/lib/pgsql/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/pgsql/.ssh/id_rsa.
Your public key has been saved in /var/lib/pgsql/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:jiouqVEgyZz+dmsHM+H0yzn4GgdhYnqBkbJ3Es/ralw postgres@test-machine02
The key's randomart image is:
+---[RSA 2048]----+
| .. |
|+o= |
|=* B o |
|+.= Bo. |
| +.+ooo S |
| .o E=.+ |
|.o = oB.+ |
|o.= o++* |
|o+oooo+.. |
+----[SHA256]-----+
-bash-4.2$
-bash-4.2$
-bash-4.2$ ssh-copy-id -i ~/.ssh/id_rsa.pub barman@test-machine01
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/var/lib/pgsql/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
barman@test-machine01's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'barman@test-machine01'"
and check to make sure that only the key(s) you wanted were added.
-bash-4.2$
-bash-4.2$ ssh 'barman@test-machine01'
Last login: Sun Aug 1 18:08:49 2021
-bash-4.2$ exit
logout
Connection to test-machine01 closed.
-bash-4.2$
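Before moving on, it is worth confirming that the SSH trust actually works non-interactively, since repmgr and barman-wal-restore will run ssh without a terminal. A small sketch (hostnames are this demo's; BatchMode=yes makes ssh fail instead of prompting for a password):

```shell
# Sketch: verify key-based (non-interactive) SSH works before wiring repmgr to it.
check_ssh() {
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true 2>/dev/null; then
    echo "passwordless SSH to $1: OK"
  else
    echo "passwordless SSH to $1: FAILED"
  fi
}
check_ssh barman@test-machine01
check_ssh postgres@test-machine01
```

If either check prints FAILED, revisit the ssh-copy-id steps above before continuing.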
Step 2.3. repmgr Setup – repmgr configuration: Because the Master and Standby servers need to be aware of each other, set up the repmgr configuration in /etc/repmgr/13/repmgr.conf on both database servers.
Master Server: Set the below parameters in repmgr.conf. Note that we already configured the primary-db server during the Barman configuration setup.
[root@test-machine02 ~]# cd /etc/repmgr/13/
[root@test-machine02 13]# cp repmgr.conf repmgr_bkp.conf
[root@test-machine02 13]# vi repmgr.conf
node_id=1
node_name='test-machine02'
conninfo='host=test-machine02 user=repmgr dbname=repmgr port=5432'
data_directory='/var/lib/pgsql/13/data/'
barman_host='barman@test-machine01'
barman_server='primary-db'
restore_command='/usr/bin/barman-wal-restore -U barman test-machine01 primary-db %f %p'
pg_bindir='/usr/pgsql-13/bin'
log_file='/var/log/repmgr.log'
:wq!
Standby Server: Set the below parameters in repmgr.conf.
[root@test-machine01 ~]# cd /etc/repmgr/13/
[root@test-machine01 13]# cp repmgr.conf repmgr_bkp.conf
[root@test-machine01 13]# vi repmgr.conf
node_id=2
node_name='test-machine01'
conninfo='host=test-machine01 user=repmgr dbname=repmgr port=5432'
data_directory='/var/lib/pgsql/13/data/'
barman_host='barman@test-machine01'
barman_server='primary-db'
restore_command='/usr/bin/barman-wal-restore -U barman test-machine01 primary-db %f %p'
pg_bindir='/usr/pgsql-13/bin'
log_file='/var/log/repmgr.log'
:wq!
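Notice that the two repmgr.conf files above differ only in node_id, node_name, and conninfo, so the per-node files can be generated from one template. A sketch using this demo's hostnames and paths (written to /tmp here just for illustration):

```shell
# Sketch: generate each node's repmgr.conf from one template; only node_id,
# node_name and conninfo vary between the demo's two nodes.
gen_repmgr_conf() {
  local id="$1" host="$2"
  cat <<EOF
node_id=${id}
node_name='${host}'
conninfo='host=${host} user=repmgr dbname=repmgr port=5432'
data_directory='/var/lib/pgsql/13/data/'
barman_host='barman@test-machine01'
barman_server='primary-db'
restore_command='/usr/bin/barman-wal-restore -U barman test-machine01 primary-db %f %p'
pg_bindir='/usr/pgsql-13/bin'
log_file='/var/log/repmgr.log'
EOF
}
gen_repmgr_conf 1 test-machine02 > /tmp/repmgr_node1.conf
gen_repmgr_conf 2 test-machine01 > /tmp/repmgr_node2.conf
```

In a real rollout you would write the output to /etc/repmgr/13/repmgr.conf on each node instead of /tmp.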
Step 2.4. repmgr Setup – pg_hba.conf & postgresql.conf: Update the parameter shared_preload_libraries = 'repmgr' in postgresql.conf. For replication to work, the Master Server needs to accept replication connections from the Standby Server, so make sure the following or equivalent lines are present in the Master database's pg_hba.conf (trust is used here for demo simplicity; use a stronger authentication method in production).
Master Server
[root@test-machine02 data]# pwd
/var/lib/pgsql/13/data
[root@test-machine02 data]# vi postgresql.conf
shared_preload_libraries = 'repmgr'
:wq!
[root@test-machine02 data]# vi pg_hba.conf
# IPv4 local connections:
host all all 192.168.114.0/24 trust
# replication privilege.
host replication all 192.168.114.0/24 trust
:wq!
[root@test-machine02 ~]#
[root@test-machine02 ~]# systemctl restart postgresql-13
[root@test-machine02 ~]#
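Editing postgresql.conf by hand works fine, but the same change can be scripted idempotently, which helps when repeating the setup on several nodes. A sketch (the set_preload helper and /tmp path are illustrative, not part of repmgr):

```shell
# Sketch: set shared_preload_libraries = 'repmgr' whether the line is currently
# commented out, already set to something else, or missing entirely.
set_preload() {
  local conf="$1"
  if grep -Eq '^[#[:space:]]*shared_preload_libraries' "$conf"; then
    # Replace the existing (possibly commented) line in place.
    sed -i "s/^[#[:space:]]*shared_preload_libraries.*/shared_preload_libraries = 'repmgr'/" "$conf"
  else
    # No line at all: append one.
    echo "shared_preload_libraries = 'repmgr'" >> "$conf"
  fi
}
# e.g. set_preload /var/lib/pgsql/13/data/postgresql.conf
# then: systemctl restart postgresql-13
```

Remember that shared_preload_libraries only takes effect after a full restart, as done above.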
Step 3.1. Node Registration – Master Server: Run the following command on the Master Server as the postgres user. It registers the primary node's PostgreSQL instance with repmgr, installs the repmgr extension, and adds metadata about the primary node to the repmgr database.
Master Server
[root@test-machine02 ~]# su - postgres
Last login: Mon Aug 2 22:28:46 +03 2021 on pts/0
-bash-4.2$
-bash-4.2$ id
uid=26(postgres) gid=26(postgres) groups=26(postgres) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
-bash-4.2$ /usr/pgsql-13/bin/repmgr -f /etc/repmgr/13/repmgr.conf primary register
INFO: connecting to primary database...
NOTICE: attempting to install extension "repmgr"
ERROR: "repmgr" is not a superuser and no superuser name supplied
HINT: supply a valid superuser name with -S/--superuser
-bash-4.2$
-bash-4.2$
-bash-4.2$ /usr/pgsql-13/bin/repmgr -f /etc/repmgr/13/repmgr.conf primary register -S postgres
WARNING: following problems with command line parameters detected:
--superuser ignored when executing PRIMARY REGISTER
INFO: connecting to primary database...
NOTICE: attempting to install extension "repmgr"
NOTICE: "repmgr" extension successfully installed
NOTICE: primary node record (ID: 1) registered
-bash-4.2$
-bash-4.2$
-bash-4.2$
-bash-4.2$ /usr/pgsql-13/bin/repmgr -f /etc/repmgr/13/repmgr.conf cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+----------------+---------+-----------+----------+----------+----------+----------+----------------------------------------------------------
1 | test-machine02 | primary | * running | | default | 100 | 5 | host=192.168.114.176 user=repmgr dbname=repmgr port=5432
-bash-4.2$
-bash-4.2$
Step 3.2. Node Registration – Standby Clone: Next, run the following command on the standby server as the postgres user to perform a dry run before actually cloning from the primary. Once the dry run succeeds, run the actual clone command and start the standby PostgreSQL cluster.
Standby Server
-bash-4.2$ id
uid=26(postgres) gid=26(postgres) groups=26(postgres) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
-bash-4.2$ hostname
test-machine01
-bash-4.2$
-bash-4.2$
-bash-4.2$ /usr/pgsql-13/bin/repmgr -h test-machine02 -U repmgr -d repmgr -f /etc/repmgr/13/repmgr.conf standby clone --dry-run
NOTICE: destination directory "/var/lib/pgsql/13/data" provided
INFO: connecting to Barman server to verify backup for "primary-db"
INFO: valid backup for server "primary-db" found in the Barman catalogue
INFO: connecting to Barman server to fetch server parameters
INFO: server parameters were successfully fetched from Barman server
INFO: connecting to source node
DETAIL: connection string is: host=test-machine02 user=repmgr dbname=repmgr
DETAIL: current installation size is 76 MB
INFO: "repmgr" extension is installed in database "repmgr"
WARNING: target data directory appears to be a PostgreSQL data directory
DETAIL: target data directory is "/var/lib/pgsql/13/data"
HINT: use -F/--force to overwrite the existing data directory
INFO: replication slot usage not requested; no replication slot will be set up for this standby
INFO: parameter "max_wal_senders" set to 10
NOTICE: standby will attach to upstream node 1
INFO: all prerequisites for "standby clone" are met
-bash-4.2$
-bash-4.2$ /usr/pgsql-13/bin/repmgr -h test-machine02 -U repmgr -d repmgr -f /etc/repmgr/13/repmgr.conf standby clone --force
NOTICE: destination directory "/var/lib/pgsql/13/data" provided
INFO: connecting to Barman server to verify backup for "primary-db"
WARNING: directory "/var/lib/pgsql/13/data" exists but is not empty
NOTICE: -F/--force provided - deleting existing data directory "/var/lib/pgsql/13/data"
INFO: creating directory "/var/lib/pgsql/13/data/repmgr"...
INFO: connecting to Barman server to fetch server parameters
INFO: connecting to source node
DETAIL: connection string is: host=test-machine02 user=repmgr dbname=repmgr
DETAIL: current installation size is 76 MB
INFO: replication slot usage not requested; no replication slot will be set up for this standby
NOTICE: retrieving backup from Barman...
receiving file list ...
1640 files to consider
PG_VERSION
3 100% 2.93kB/s 0:00:00 (xfr#1, to-chk=1639/1640)
backup_label
241 100% 235.35kB/s 0:00:00 (xfr#2, to-chk=1638/1640)
backup_label.old
226 100% 220.70kB/s 0:00:00 (xfr#3, to-chk=1637/1640)
backup_manifest
321,275 100% 38.30MB/s 0:00:00 (xfr#4, to-chk=1636/1640)
current_logfiles
30 100% 3.66kB/s 0:00:00 (xfr#5, to-chk=1635/1640)
old_postgresql.auto.conf
PG_13_202007201/16716/827
8,192 100% 29.20kB/s 0:00:00 (xfr#749, to-chk=3/755)
PG_13_202007201/16716/828
8,192 100% 29.20kB/s 0:00:00 (xfr#750, to-chk=2/755)
PG_13_202007201/16716/PG_VERSION
3 100% 0.01kB/s 0:00:00 (xfr#751, to-chk=1/755)
PG_13_202007201/16716/pg_filenode.map
512 100% 1.82kB/s 0:00:00 (xfr#752, to-chk=0/755)
WARNING: directory "/u01/pg2_tbls" exists but is not empty
NOTICE: standby clone (from Barman) complete
NOTICE: you can now start your PostgreSQL server
HINT: for example: pg_ctl -D /var/lib/pgsql/13/data start
HINT: after starting the server, you need to register this standby with "repmgr standby register"
-bash-4.2$
-bash-4.2$
-bash-4.2$
-bash-4.2$ /usr/pgsql-13/bin/pg_ctl -D /var/lib/pgsql/13/data start
waiting for server to start....2021-08-02 10:57:02.830 +03 [99998] LOG: redirecting log output to logging collector process
2021-08-02 10:57:02.830 +03 [99998] HINT: Future log output will appear in directory "log".
. done
server started
-bash-4.2$
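The dry-run-then-clone sequence above can be wrapped in a small function so the destructive --force clone only runs when the dry run passes. A sketch using this demo's paths and hostnames (clone_standby is an illustrative helper, not a repmgr command):

```shell
# Sketch: gate the real clone on a successful dry run.
clone_standby() {
  local repmgr=/usr/pgsql-13/bin/repmgr
  local conf=/etc/repmgr/13/repmgr.conf
  if "$repmgr" -h test-machine02 -U repmgr -d repmgr -f "$conf" standby clone --dry-run; then
    "$repmgr" -h test-machine02 -U repmgr -d repmgr -f "$conf" standby clone --force
  else
    echo "dry-run failed; not cloning" >&2
    return 1
  fi
}
```

If the dry run reports a missing prerequisite (no Barman backup, bad SSH trust, etc.), the existing data directory is left untouched.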
Step 3.3. Node Registration – Standby Server: Run the following command on the standby server as the postgres user to register it with repmgr.
Standby Server
-bash-4.2$
-bash-4.2$ /usr/pgsql-13/bin/repmgr -f /etc/repmgr/13/repmgr.conf standby register
INFO: connecting to local node "test-machine01" (ID: 2)
INFO: connecting to primary database
WARNING: --upstream-node-id not supplied, assuming upstream node is primary (node ID 1)
INFO: standby registration complete
NOTICE: standby node "test-machine01" (ID: 2) successfully registered
-bash-4.2$
Step 4. Test Replication Setup:
Master Server
postgres=#
postgres=# \connect sample1
You are now connected to database "sample1" as user "postgres".
sample1=#
sample1=# CREATE TABLE my_table_test2 (col1 varchar, col2 varchar);
CREATE TABLE
sample1=#
sample1=# INSERT INTO my_table_test2 VALUES ('row1', 'this is row 1'), ('row2', 'this is row 2');
INSERT 0 2
sample1=#
Standby Server
postgres=#
postgres=# \connect sample1
You are now connected to database "sample1" as user "postgres".
sample1=#
sample1=# select count(*) from my_table_test2;
count
-------
2
(1 row)
Step 5. Monitoring Replication Setup: We can run the following commands from any of the nodes as the postgres user:
-bash-4.2$ id
uid=26(postgres) gid=26(postgres) groups=26(postgres) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
-bash-4.2$
-bash-4.2$ /usr/pgsql-13/bin/repmgr -f /etc/repmgr/13/repmgr.conf cluster show --compact
ID | Name | Role | Status | Upstream | Location | Prio. | TLI
----+----------------+---------+-----------+----------------+----------+-------+-----
1 | test-machine02 | primary | * running | | default | 100 | 5
2 | test-machine01 | standby | running | test-machine02 | default | 100 | 5
-bash-4.2$
-bash-4.2$
-bash-4.2$
-bash-4.2$ /usr/pgsql-13/bin/repmgr -f /etc/repmgr/13/repmgr.conf cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+----------------+---------+-----------+----------------+----------+----------+----------+----------------------------------------------------------
1 | test-machine02 | primary | * running | | default | 100 | 5 | host=192.168.114.176 user=repmgr dbname=repmgr port=5432
2 | test-machine01 | standby | running | test-machine02 | default | 100 | 5 | host=test-machine01 user=repmgr dbname=repmgr port=5432
-bash-4.2$
-bash-4.2$
-bash-4.2$ /usr/pgsql-13/bin/repmgr -f /etc/repmgr/13/repmgr.conf cluster matrix
INFO: connecting to database
Name | ID | 1 | 2
----------------+----+---+---
test-machine02 | 1 | * | *
test-machine01 | 2 | * | *
-bash-4.2$
-bash-4.2$
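For lightweight alerting, the compact "cluster show" output above is easy to parse from a cron job. A sketch (count_standbys is an illustrative helper; here it is fed a captured sample, but in practice you would pipe the live repmgr command into it):

```shell
# Sketch: count attached standbys by parsing `repmgr cluster show --compact`
# output, where field 3 is Role and field 4 is Status.
count_standbys() {
  awk -F'|' '$3 ~ /standby/ && $4 ~ /running/ { n++ } END { print n+0 }'
}
sample=' 1  | test-machine02 | primary | * running |                | default  | 100 | 5
 2  | test-machine01 | standby |   running | test-machine02 | default  | 100 | 5'
printf '%s\n' "$sample" | count_standbys   # prints 1
```

If the count drops below the expected number of standbys, the job can fire an alert instead of someone noticing lag after the fact.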
Reference: repmgr Documentation:
https://repmgr.org/docs/4.1/installation-packages.html#INSTALLATION-PACKAGES-REDHAT
https://repmgr.org/docs/4.1/configuration-file-settings.html
https://repmgr.org/docs/4.1/cloning-from-barman.html
This document is just for learning purposes; always validate in a LAB environment first before applying to a LIVE environment.
Hope you like this article!
Please share your feedback/comments, subscribe, and follow us below, and don't forget to click the bell icon to get the most recent updates. Click here to understand more about our pursuit.