PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios

A couple of weeks ago, Jobin and I gave a short presentation during Percona Live Online with a title similar to this post's: “PostgreSQL HA With Patroni: Looking at Failure Scenarios and How the Cluster Recovers From Them”. We deployed a 3-node PostgreSQL environment on some recycled hardware we had lying around and set about “breaking” it in different ways: unplugging network and power cables, killing main processes, attempting to saturate the processors. All of this while continuously writing and reading data from PostgreSQL. The idea was to see how Patroni would handle the failures and manage the cluster to continue delivering service. It was a fun demo!

We promised a follow-up post explaining how we set up the environment, so you could give it a try yourselves, and this is it. We hope you also have fun attempting to reproduce our small experiment, but mostly that you use it as an opportunity to learn how a PostgreSQL HA environment managed by Patroni works in practice: there is nothing like a hands-on lab for this!

Initial Setup

We recycled three 10-year-old Intel Atom mini-computers for our experiment, but you could use virtual machines instead: even though you will miss the excitement of unplugging real cables, this can still be simulated with a VM. We installed the server version of Ubuntu 20.04 and configured the machines to know “each other” by hostname; here’s what the hosts file of the first node looked like:

Shell
$ cat /etc/hosts
127.0.0.1 localhost node1
192.168.1.11 node1
192.168.1.12 node2
192.168.1.13 node3

etcd

Patroni supports a myriad of systems for its Distributed Configuration Store (DCS), but etcd remains a popular choice. We installed the version available from the Ubuntu repository on all three nodes:

Shell
sudo apt-get install etcd

It is necessary to initialize the etcd cluster from one of the nodes and we did that from node1 using the following configuration file:

Shell
$ cat /etc/default/etcd
ETCD_NAME=node1
ETCD_INITIAL_CLUSTER="node1=http://192.168.1.11:2380"
ETCD_INITIAL_CLUSTER_TOKEN="devops_token"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.11:2380"
ETCD_DATA_DIR="/var/lib/etcd/postgresql"
ETCD_LISTEN_PEER_URLS="http://192.168.1.11:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.11:2379,http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.11:2379"

Note how ETCD_INITIAL_CLUSTER_STATE is defined as “new”.

We then restarted the service:

Shell
sudo systemctl restart etcd

We can then move on to install etcd on node2. The configuration file follows the same structure as that of node1, except that we are adding node2 to an existing cluster so we should indicate the other node(s):

Shell
ETCD_NAME=node2
ETCD_INITIAL_CLUSTER="node1=http://192.168.1.11:2380,node2=http://192.168.1.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="devops_token"
ETCD_INITIAL_CLUSTER_STATE="existing"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.12:2380"
ETCD_DATA_DIR="/var/lib/etcd/postgresql"
ETCD_LISTEN_PEER_URLS="http://192.168.1.12:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.12:2379,http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.12:2379"

Before we restart the service, we need to formally add node2 to the etcd cluster by running the following command on node1:

Shell
sudo etcdctl member add node2 http://192.168.1.12:2380

We can then restart the etcd service on node2:

Shell
sudo systemctl restart etcd

The configuration file for node3 looks like this:

Shell
ETCD_NAME=node3
ETCD_INITIAL_CLUSTER="node1=http://192.168.1.11:2380,node2=http://192.168.1.12:2380,node3=http://192.168.1.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="devops_token"
ETCD_INITIAL_CLUSTER_STATE="existing"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.13:2380"
ETCD_DATA_DIR="/var/lib/etcd/postgresql"
ETCD_LISTEN_PEER_URLS="http://192.168.1.13:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.13:2379,http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.13:2379"

Remember we need to add node3 to the cluster by running the following command on node1:

Shell
sudo etcdctl member add node3 http://192.168.1.13:2380

before we can restart the service on node3:

Shell
sudo systemctl restart etcd

We can verify the cluster state to confirm it has been deployed successfully by running the following command from any of the nodes:

Shell
$ sudo etcdctl member list
2ed43136d81039b4: name=node3 peerURLs=http://192.168.1.13:2380 clientURLs=http://192.168.1.13:2379 isLeader=false
d571a1ada5a5afcf: name=node1 peerURLs=http://192.168.1.11:2380 clientURLs=http://192.168.1.11:2379 isLeader=true
ecec6c549ebb23bc: name=node2 peerURLs=http://192.168.1.12:2380 clientURLs=http://192.168.1.12:2379 isLeader=false

As we can see above, node1 is the leader at this point, which is expected since the etcd cluster has been bootstrapped from it. If you get a different result, check for etcd entries logged to /var/log/syslog on each node.
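Besides the member list, you can also ask etcd directly whether the cluster is healthy. Here is a minimal sketch, assuming the v2 etcdctl API that the stock Ubuntu 20.04 etcd package defaults to:

Shell
# report the health of the cluster and of each individual member
sudo etcdctl cluster-health
# if something looks off, the etcd entries in syslog usually explain why
grep etcd /var/log/syslog | tail -n 20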

Watchdog

Quoting Patroni’s manual:

Watchdog devices are software or hardware mechanisms that will reset the whole system when they do not get a keepalive heartbeat within a specified timeframe. This adds an additional layer of fail safe in case usual Patroni split-brain protection mechanisms fail.

While the use of a watchdog mechanism with Patroni is optional, you shouldn’t really consider deploying a PostgreSQL HA environment in production without it.

For our tests, we used the standard software implementation for watchdog that is shipped with Ubuntu 20.04, a module called softdog. Here’s the procedure we used on all three nodes to configure the module to load:

Shell
sudo sh -c 'echo "softdog" >> /etc/modules'

Patroni will be the component interacting with the watchdog device. Since Patroni is run by the postgres user, we need to either set the permissions of the watchdog device open enough so the postgres user can write to it or make the device owned by postgres itself, which we consider a safer approach (as it is more restrictive):

Shell
sudo sh -c 'echo "KERNEL==\"watchdog\", OWNER=\"postgres\", GROUP=\"postgres\"" >> /etc/udev/rules.d/61-watchdog.rules'

These two steps looked like all that would be required for the watchdog to work, but to our surprise the softdog module wasn’t loaded after restarting the servers. After spending quite some time digging around, we found the module was blacklisted by default and there was a stray file with such a directive still lingering around:

Shell
$ grep blacklist /lib/modprobe.d/* /etc/modprobe.d/* | grep softdog
/lib/modprobe.d/blacklist_linux_5.4.0-72-generic.conf:blacklist softdog

Editing that file in each of the nodes to remove the line above and restarting the servers did the trick:

Shell
$ lsmod | grep softdog
softdog 16384 0

Shell
$ ls -l /dev/watchdog*
crw-rw---- 1 postgres postgres 10, 130 May 21 21:30 /dev/watchdog
crw------- 1 root root 245, 0 May 21 21:30 /dev/watchdog0
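If you prefer to script that fix instead of editing the file by hand on each node, here is a minimal sketch; note the blacklist file name embeds the kernel version, so the path below is simply the one from our systems and needs adjusting:

Shell
# drop the blacklist directive (adjust the file name to your kernel version)
sudo sed -i '/^blacklist softdog/d' /lib/modprobe.d/blacklist_linux_5.4.0-72-generic.conf
# load the module right away instead of waiting for the next reboot
sudo modprobe softdog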

PostgreSQL

Percona Distribution for PostgreSQL can be easily installed from the Percona Repository in a few steps:

Shell
sudo apt-get update -y; sudo apt-get install -y wget gnupg2 lsb-release curl
wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb
sudo apt-get update
sudo percona-release setup ppg-12
sudo apt-get install percona-postgresql-12

An important concept to understand in a PostgreSQL HA environment like this one is that PostgreSQL should not be started automatically by systemd during the server initialization: we should leave it to Patroni to fully manage it, including the process of starting and stopping the server. Thus, we should disable the service:

Shell
sudo systemctl disable postgresql

For our tests, we want to start with a fresh new PostgreSQL setup and let Patroni bootstrap the cluster, so we stop the server and remove the data directory that has been created as part of the PostgreSQL installation:

Shell
sudo systemctl stop postgresql
sudo rm -fr /var/lib/postgresql/12/main

These steps should be repeated on nodes 2 and 3 as well.

Patroni

The Percona Repository also includes a package for Patroni, so with the repository already configured on the nodes we can install Patroni with a simple:

Shell
sudo apt-get install percona-patroni

Here’s the configuration file we have used for node1:

Shell
$ cat /etc/patroni/config.yml
scope: stampede
name: node1

restapi:
  listen: 0.0.0.0:8008
  connect_address: node1:8008

etcd:
  host: node1:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
#    master_start_timeout: 300
#    synchronous_mode: false
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        wal_level: replica
        hot_standby: "on"
        logging_collector: 'on'
        max_wal_senders: 5
        max_replication_slots: 5
        wal_log_hints: "on"
#        archive_mode: "on"
#        archive_timeout: 600
#        archive_command: "cp -f %p /home/postgres/archived/%f"
#      recovery_conf:
#        restore_command: cp /home/postgres/archived/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
    - encoding: UTF8
    - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
    - host replication replicator 192.168.1.1/24 md5
    - host replication replicator 127.0.0.1/32 trust
    - host all all 192.168.1.1/24 md5
    - host all all 0.0.0.0/0 md5
#    - hostssl all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
#  post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which need to be created after initializing new cluster
  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb

postgresql:
  listen: 0.0.0.0:5432
  connect_address: node1:5432
  data_dir: "/var/lib/postgresql/12/main"
  bin_dir: "/usr/lib/postgresql/12/bin"
#  config_dir:
  pgpass: /tmp/pgpass0
  authentication:
    replication:
      username: replicator
      password: vagrant
    superuser:
      username: postgres
      password: vagrant
  parameters:
    unix_socket_directories: '/var/run/postgresql'

watchdog:
  mode: required  # Allowed values: off, automatic, required
  device: /dev/watchdog
  safety_margin: 5

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false

With the configuration file in place, and now that we already have the etcd cluster up, all that is required is to restart the Patroni service:

Shell
sudo systemctl restart patroni

When Patroni starts, it will take care of initializing PostgreSQL (because the service is not currently running and the data directory is empty), following the directives in the bootstrap section of Patroni’s configuration file. If everything went according to plan, you should be able to connect to PostgreSQL using the credentials in the configuration file (the password is vagrant):

Shell
$ psql -U postgres
psql (12.6 (Ubuntu 2:12.6-2.focal))
Type "help" for help.

postgres=#

Repeat the operation for installing Patroni on nodes 2 and 3: the only difference is that you will need to replace the references to node1 in the configuration file (there are four of them) with the respective node name.
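Since those references are the only differences, one way to prepare the file on the other nodes is a plain substitution. A sketch for node2, assuming node1’s config.yml has already been copied over to it:

Shell
# run on node2, after copying /etc/patroni/config.yml from node1
sudo sed -i 's/node1/node2/g' /etc/patroni/config.yml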

You can also check the state of the Patroni cluster we just created with:

Shell
$ sudo patronictl -c /etc/patroni/config.yml list
+----------+--------+-------+--------+---------+----+-----------+
| Cluster  | Member | Host  | Role   | State   | TL | Lag in MB |
+----------+--------+-------+--------+---------+----+-----------+
| stampede | node1  | node1 | Leader | running |  2 |           |
| stampede | node2  | node2 |        | running |  2 |         0 |
| stampede | node3  | node3 |        | running |  2 |         0 |
+----------+--------+-------+--------+---------+----+-----------+

node1 started the Patroni cluster so it was automatically made the leader – and thus the primary/master PostgreSQL server. Nodes 2 and 3 are configured as read replicas (as the hot_standby option was enabled in Patroni’s configuration file).

HAProxy

A common implementation of high availability in a PostgreSQL environment makes use of a proxy: instead of connecting directly to the database server, the application connects to the proxy, which forwards the request to PostgreSQL. When HAproxy is used for this, it is also possible to route read requests to one or more replicas, for load balancing. However, this is not a transparent process: the application needs to be aware of it and split read-only from read-write traffic itself. With HAproxy, this is done by providing two different ports for the application to connect to. We opted for the following setup:

  • Writes → 5000
  • Reads → 5001

HAproxy can be installed as an independent server (and you can have as many as you want) but it can also be installed on the application server or the database server itself – it is a light enough service. For our tests, we planned on using our own Linux workstations (which also run Ubuntu 20.04) to simulate application traffic so we installed HAproxy on them:

Shell
sudo apt-get install haproxy

With the software installed, we modified the main configuration file as follows:

Shell
$ cat /etc/haproxy/haproxy.cfg
global
    maxconn 100

defaults
    log global
    mode tcp
    retries 2
    timeout client 30m
    timeout connect 4s
    timeout server 30m
    timeout check 5s

listen stats
    mode http
    bind *:7000
    stats enable
    stats uri /

listen primary
    bind *:5000
    option httpchk OPTIONS /master
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 node1:5432 maxconn 100 check port 8008
    server node2 node2:5432 maxconn 100 check port 8008
    server node3 node3:5432 maxconn 100 check port 8008

listen standbys
    balance roundrobin
    bind *:5001
    option httpchk OPTIONS /replica
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 node1:5432 maxconn 100 check port 8008
    server node2 node2:5432 maxconn 100 check port 8008
    server node3 node3:5432 maxconn 100 check port 8008

Note there are two sections: primary, using port 5000, and standbys, using port 5001. All three nodes are included in both sections: that’s because they are all potential candidates to be either primary or replica. For HAproxy to know which role each node currently has, it sends an HTTP request to port 8008 of the node, and Patroni answers. Patroni provides built-in REST API support for health-check monitoring that integrates perfectly with HAproxy for this:

Shell
$ curl -s http://node1:8008
{"state": "running", "postmaster_start_time": "2021-05-24 14:50:11.707 UTC", "role": "master", "server_version": 120006, "cluster_unlocked": false, "xlog": {"location": 25615248}, "timeline": 1, "database_system_identifier": "6965869170583425899", "patroni": {"version": "1.6.4", "scope": "stampede"}}

We configured the standbys group to balance read requests in a round-robin fashion, so each connection request (or reconnection) will alternate between the available replicas. We can test this in practice; to facilitate the process, let’s first save the postgres user password in a file:

Shell
echo "localhost:5000:postgres:postgres:vagrant" > ~/.pgpass
echo "localhost:5001:postgres:postgres:vagrant" >> ~/.pgpass
chmod 0600 ~/.pgpass

We can then execute two read-requests to verify the round-robin mechanism is working as intended:

Shell
$ psql -Upostgres -hlocalhost -p5001 -t -c "select inet_server_addr()"
192.168.1.13

Shell
$ psql -Upostgres -hlocalhost -p5001 -t -c "select inet_server_addr()"
192.168.1.12

as well as test the writer access:

Shell
$ psql -Upostgres -hlocalhost -p5000 -t -c "select inet_server_addr()"
192.168.1.11

You can also check the state of HAproxy by visiting http://localhost:7000/ on your browser.

Workload

To best simulate a production environment for testing our failure scenarios, we wanted continuous reads and writes to the database. We could have used a benchmark tool such as Sysbench or Pgbench, but we were more interested in observing the switch of source server upon a failure than in the load itself. Jobin wrote a simple Python script that is perfect for this, HAtester. As was the case with HAproxy, we ran the script from our Linux workstation. Since it is a Python script, you need to have a PostgreSQL driver for Python installed to execute it:

Shell
sudo apt-get install python3-psycopg2
curl -LO https://raw.githubusercontent.com/jobinau/pgscripts/main/patroni/HAtester.py
chmod +x HAtester.py

Edit the script with the credentials to access the PostgreSQL servers (through HAproxy) if you are using settings different from ours. The only requirement for it to work is to have the target table created beforehand, so first connect to the postgres database (unless you are using a different target) on the primary and run:

Shell
CREATE TABLE HATEST (TM TIMESTAMP);

You can then start two different sessions:

  1. One for writes:

    Shell
    ./HAtester.py 5000

  2. One for reads:

    Shell
    ./HAtester.py 5001

The idea is to observe what happens with database traffic when the environment experiences a failure; that is, how HAproxy will route reads and writes as Patroni adjusts the PostgreSQL cluster. You can continuously monitor Patroni from the point of view of the nodes by opening a session in each of them and running the following command:

Shell
sudo -u postgres watch patronictl -c /etc/patroni/config.yml list

To facilitate observability and better follow the changes in real-time, we used the terminal multiplexer Tmux to visualize all 5 sessions on the same screen:


  • On the left side, we have one session open for each of the 3 nodes, continuously running:

    Shell
    sudo -u postgres watch patronictl -c /etc/patroni/config.yml list

    It’s better to have the Patroni view for each node independently because when you start the failure tests you will lose connection to a part of the cluster.

  • On the right side, we are executing the HAtester.py script from our workstation:
    • Sending writes through port 5000:

      Shell
      ./HAtester.py 5000

    • and reads through port 5001:

      Shell
      ./HAtester.py 5001

A couple of notes on the execution of the HAtester.py script:

  • Pressing Ctrl+C will break the connection, but the script will reconnect, this time to a different replica (in the case of reads), because the standbys group in HAproxy is configured with round-robin balancing.
  • When a switchover or failover takes place and the nodes are re-arranged in the cluster, you may temporarily see writes sent to a node that used to be a replica and was just promoted to primary, and reads sent to a node that used to be the primary and was just demoted to a replica: that’s a limitation of the HAtester.py script but “by design”; we favored faster reconnections and minimal checks on the node’s role for demonstration purposes. On a production application, this part ought to be implemented differently.

Testing Failure Scenarios

The fun part starts now! We leave it to you to test and play around to see what happens with the PostgreSQL cluster in practice following a failure. As suggestions, here are the tests we did in our presentation. For each failure scenario, observe how the cluster re-adjusts itself and the impact on read and write traffic.

1) Loss of Network Communication

  • Unplug the network cable from one of the nodes (or simulate this condition in your VM; a sketch for VMs follows this list):
    • First from a replica
    • Then from the primary
  • Unplug the network cable from one replica and the primary at the same time:
    • Does Patroni experience a split-brain situation?
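If you are running VMs rather than physical machines, a crude stand-in for pulling the cable is to bring the node’s network interface down from its console. A sketch, where the interface name is an assumption (check yours with ip link):

Shell
# run from the VM console, not over SSH, or you will cut your own session
sudo ip link set enp0s3 down    # "unplug" the cable
sudo ip link set enp0s3 up      # "plug" it back in once you are done observing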

2) Power Outage

  • Unplug the power cable from the primary
  • Wait until the cluster has re-adjusted, then plug the power cable back in and start the node
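On a VM, an abrupt reset is a rough equivalent of pulling the power cable. One option is the kernel SysRq trigger, which reboots the machine immediately without any clean shutdown; this is only a sketch and assumes SysRq is enabled on the guest:

Shell
# immediate reboot, no sync, no clean shutdown -- lab machines only!
echo b | sudo tee /proc/sysrq-trigger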

3) SEGFAULT

Simulate an OOM kill or crash by killing the postmaster process on one of the nodes with kill -9.
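The postmaster PID is stored in the first line of postmaster.pid inside the data directory, so a sketch using the data directory from our Patroni configuration could look like this:

Shell
# kill the postmaster abruptly and watch how Patroni reacts
sudo kill -9 $(sudo head -1 /var/lib/postgresql/12/main/postmaster.pid)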

4) Killing Patroni

Remember that Patroni is managing PostgreSQL. What happens if the Patroni process (and not PostgreSQL) is killed?
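One way to try it is to send SIGKILL to the Patroni service on one of the nodes; a sketch (keep in mind that, depending on the Restart setting of the systemd unit, systemd may or may not bring Patroni back on its own, which is itself interesting to observe):

Shell
# kill the Patroni processes (not PostgreSQL) belonging to the patroni unit
sudo systemctl kill -s SIGKILL patroni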

5) CPU Saturation

Simulate CPU saturation with a benchmark tool such as Sysbench, for example:

Shell
sysbench cpu --threads=10 --time=0 run

This one is a bit tricky, as the reads and writes are each single-threaded operations. You may need to decrease the priority of the HAtester.py processes with renice, and possibly increase that of Sysbench’s; see the sketch below.
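A sketch of those renice calls, run on whichever host each process lives on; the process name patterns are assumptions, so adjust them to what you see in ps:

Shell
# push the HAtester.py sessions to a lower priority (higher nice value)
sudo renice -n 10 -p $(pgrep -f HAtester.py)
# and give the sysbench workers a higher priority (negative values need root)
sudo renice -n -5 -p $(pgrep -x sysbench)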

6) Manual Switchover

Patroni facilitates changes in the PostgreSQL hierarchy. Switchover operations can be scheduled; the command below is interactive and will prompt you with options:

Shell
sudo -u postgres patronictl -c /etc/patroni/config.yml switchover

Alternatively, you can be specific and tell Patroni exactly what to do:

Shell
sudo -u postgres patronictl -c /etc/patroni/config.yml switchover --master node1 --candidate node2 --force

We hope you had fun with this hands-on lab! If you have questions or comments, leave us a note in the comments section below!

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together.

Download Percona Distribution for PostgreSQL Today!

Comments

markosutic, 6 months ago:

Hello,

Thank you very much for the article.
I was curious: when the watchdog (softdog) triggers a restart, do you receive info in the OS log that the restart was triggered by the watchdog? Something like “softdog: Initiating system reboot”…

I have executed a few tests and there was no info to distinguish when the watchdog triggered the restart.

Best regards,
Marko

Jobin Augustine (Editor), in reply to markosutic, 2 months ago:

The chance of the watchdog having to intervene is very slim, because the moment the primary realizes its leader key has expired and it is not able to renew it, it demotes itself to a standby. So I too never saw that happening in real-world cases.
Probably the only possibility is to overload the server and make it unresponsive enough that the watchdog has some job to do. But if the system is hanging, I don’t know whether we should expect a good kernel message.

pradeep batham, 2 months ago:

Thanks for this wonderful blog. I am new to Postgres and I have not faced any issues configuring PG HA and testing the failover.

Jobin Augustine (Editor), in reply to pradeep batham, 2 months ago:

Great to hear that you found the blog post useful. Yes, Patroni has emerged as the best HA framework for PostgreSQL.
