Patroni

Overview

Describes how to set up and run Patroni, the cluster manager for OpenSQL.

Since Patroni starts, stops, and restarts PostgreSQL directly, the PostgreSQL-related parameters must be set as part of the Patroni configuration.

Environment configuration

Patroni has three types of configuration.

Global Dynamic Configuration

  • These options are stored in etcd and applied to all cluster nodes.

  • Dynamic configuration can be set using the patronictl edit-config tool or the REST API.

  • Dynamic configuration changes are reflected asynchronously to all nodes.
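For example, a dynamic parameter can be changed cluster-wide with patronictl or the REST API. This is a sketch, assuming a running cluster whose REST API listens on localhost:8008 and the configuration file path used in this manual:

```shell
# Change a dynamic parameter cluster-wide; every node applies it asynchronously.
patronictl -c /etc/patroni/patroni.yml edit-config -s postgresql.parameters.max_connections=200

# The equivalent REST API call (PATCH /config) against any member:
curl -s -X PATCH http://localhost:8008/config \
  -d '{"postgresql": {"parameters": {"max_connections": 200}}}'
```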

Local Configuration File (patroni.yml)

  • After modifying patroni.yml, the configuration can be reloaded dynamically without restarting Patroni:

    • Sending SIGHUP to the Patroni process causes the local configuration file to be reread

    • Send a POST /reload request to the REST API

    • Use the patronictl reload command
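Concretely, any of the following rereads the local configuration file. This assumes a running cluster, the REST API on localhost:8008, and the cluster scope opensql used in this manual:

```shell
# 1) Send SIGHUP to the Patroni process
kill -s HUP "$(pgrep -f 'patroni /etc/patroni/patroni.yml')"

# 2) Send a POST /reload request to the REST API
curl -s -X POST http://localhost:8008/reload

# 3) Use patronictl (opensql is the cluster scope)
patronictl -c /etc/patroni/patroni.yml reload opensql
```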

Environment Variables

Some local configuration parameters can be set or overridden with environment variables.
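Environment variables follow the PATRONI_&lt;SECTION&gt;_&lt;NAME&gt; naming convention and take precedence over the local configuration file. A minimal sketch, using the values from this manual's configuration:

```shell
# Equivalent to scope, name, and restapi.listen in patroni.yml;
# these override the corresponding settings in the local configuration file.
export PATRONI_SCOPE=opensql
export PATRONI_NAME=postgresql0
export PATRONI_RESTAPI_LISTEN=0.0.0.0:8008

echo "$PATRONI_SCOPE $PATRONI_NAME"  # prints "opensql postgresql0"
```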

Of the three methods, this manual describes setting up and running Patroni with a local configuration file.

Local Configuration File

When the Patroni process starts, it reads its settings from the YAML file whose path is passed as an argument. You can send a SIGHUP signal to the Patroni process or send a POST /reload request to the REST API server to reload the configuration file. The path to the default template configuration file is /etc/patroni/patroni.yml. Create a YAML file at that path and modify its contents to suit your configuration.

  • You can define meta information for Patroni clusters, etcd connection information, logging configuration, REST API server configuration, and PostgreSQL parameter information.

  • PostgreSQL parameters can be set in Local Configuration and Global Dynamic Configuration. If there are duplicate keys, the value in Local Configuration takes precedence.

  • You can define a bootstrap.dcs entry to set the initial configuration for the Global Dynamic Configuration, as shown below.

scope: opensql
#namespace: /service/
name: postgresql0
postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.0.100:5432
  proxy_address: 127.0.0.1:6432  # The address of connection pool (e.g., pgbouncer) running next to Patroni/Postgres. Only for service discovery.
  #data_dir: data/postgresql0
  data_dir: /var/lib/pgsql/16/data
  bin_dir: /usr/pgsql-16/bin
#  config_dir:
  pgpass: /tmp/pgpass0
  authentication:
    replication:
      username: patroni_repl
      password: patroni_repl
    superuser:
      username: postgres
      password: zalando
    rewind:  # Has no effect on postgres 10 and lower
      username: patroni_rewind
      password: patroni_rewind
  pg_hba:
    - local all all trust
    - host replication patroni_repl 192.168.0.0/24 trust
    - host replication patroni_repl 127.0.0.1/32 trust
    - host all all 0.0.0.0/0 md5
    - host all barman 192.168.0.0/24 trust
    - host replication streaming_barman 192.168.0.0/24 trust
  parameters:
    log_line_prefix: '%m [%r] [%u] [%a]'
    archive_command: 'barman-wal-archive node4 pg %p'
    archive_mode: 'true'
    wal_level: 'replica'

Example of Configuration File

Create a patroni.yml file on each node as shown below.

Change the connect_address value in the restapi section and the hosts setting in the etcd3 section to the node-specific IP addresses.

For example, when configuring a Patroni cluster with the following node addresses:

  • node1 172.176.0.2

  • node2 172.176.0.3

  • node3 172.176.0.4

# /etc/patroni/patroni.yml
scope: opensql
name: postgresql0
restapi:
  listen: 0.0.0.0:8008
  connect_address: 172.176.0.2:8008
etcd3:
  protocol: http
  hosts:
  - 172.176.0.2:2379
  - 172.176.0.3:2379
  - 172.176.0.4:2379
bootstrap:
  # This section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`.
  # WARNING! If you want to change any of the parameters that were set up
  # via `bootstrap.dcs` section, please use `patronictl edit-config`!
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      parameters:
#        wal_level: hot_standby
#        hot_standby: "on"
        max_connections: 100
        max_worker_processes: 8
#        wal_keep_segments: 8
#        max_wal_senders: 10
#        max_replication_slots: 10
#        max_prepared_transactions: 0
#        max_locks_per_transaction: 64
#        wal_log_hints: "on"
#        track_commit_timestamp: "off"
#        archive_mode: "on"
#        archive_timeout: 1800s
#        archive_command: mkdir -p ../wal_archive && test ! -f ../wal_archive/%f && cp %p ../wal_archive/%f
#      recovery_conf:
#        restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
  - encoding: UTF8
  - data-checksums
  
postgresql:
  listen: 0.0.0.0:5432
  proxy_address: 127.0.0.1:6432  # The address of the connection pool (e.g., openproxy) running next to Patroni/Postgres. Only for service discovery.
  data_dir: /home/postgres/data
#  config_dir:
  pgpass: /tmp/pgpass0
  authentication:
    replication:
      username: patroni_repl
      password: patroni_repl
    superuser:
      username: postgres
      password: postgres
    rewind:  # Has no effect on postgres 10 and lower
      username: patroni_rewind
      password: patroni_rewind
  pg_hba:
    - local all all trust
    - host replication replicator all md5
    - host all all all md5
  parameters:
    log_line_prefix: '%m [%r] [%u] [%a]'

tags:
    noloadbalance: false
    clonefrom: false
    nostream: false

Execute Patroni

After creating patroni.yml, start the Patroni process on each node with the command below. The path to a valid configuration YAML file must be passed as an argument.

$ patroni /etc/patroni/patroni.yml
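Before starting (or when troubleshooting startup problems), the configuration file can be checked with Patroni's built-in validator, if your Patroni version provides it:

```shell
# Validate patroni.yml without starting the cluster; prints errors if any are found.
patroni --validate-config /etc/patroni/patroni.yml
```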

Check Cluster execution

Use the patronictl command to check that the cluster is running normally.

As shown in the example below, the Leader is running and the Replicas are streaming, which is the expected state after a normal cluster startup.

postgres@opensql3:~$ patronictl -c /etc/patroni/patroni.yml list
+ Cluster: opensql (7488929455988801618) ------+----+-----------+
| Member   | Host        | Role    | State     | TL | Lag in MB |
+----------+-------------+---------+-----------+----+-----------+
| patroni1 | 172.176.0.4 | Replica | streaming |  1 |         0 |
| patroni2 | 172.176.0.2 | Replica | streaming |  1 |         0 |
| patroni3 | 172.176.0.3 | Leader  | running   |  1 |           |
+----------+-------------+---------+-----------+----+-----------+
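The same information is available from the REST API, which is also what load balancers typically probe. This assumes the REST API port 8008 configured above:

```shell
# Full cluster topology as JSON, from any member:
curl -s http://172.176.0.2:8008/cluster

# Returns HTTP 200 only on the current leader (useful for health checks):
curl -s -o /dev/null -w '%{http_code}\n' http://172.176.0.2:8008/primary
```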
