From 496053cf14f496c4f351887864d838342e30a049 Mon Sep 17 00:00:00 2001
From: Anastasia Alexandrova
Date: Thu, 15 Aug 2024 12:33:22 +0300
Subject: [PATCH] PG-955 Added etcd.service sample file (#638)

PG-963 Updated Patroni config
---
 docs/enable-extensions.md      | 31 +++++++++++++++++++++++++++----
 docs/solutions/ha-setup-apt.md |  8 +++++++-
 docs/solutions/ha-setup-yum.md | 13 ++++++++-----
 3 files changed, 42 insertions(+), 10 deletions(-)

diff --git a/docs/enable-extensions.md b/docs/enable-extensions.md
index 0d823d3f1..5e81f38f5 100644
--- a/docs/enable-extensions.md
+++ b/docs/enable-extensions.md
@@ -10,20 +10,43 @@ While setting up a high availability PostgreSQL cluster with Patroni, you will n
 
 - Patroni installed on every ``postresql`` node.
 
-- Distributed Configuration Store (DCS). Patroni supports such DCSs as etcd, zookeeper, Kubernetes though [etcd](https://etcd.io/) is the most popular one. It is available within Percona Distribution for PostgreSQL for all supported operating systems. 
+- Distributed Configuration Store (DCS). Patroni supports such DCSs as etcd, zookeeper, Kubernetes though [etcd](https://etcd.io/) is the most popular one. It is available within Percona Distribution for PostgreSQL for all supported operating systems.
 
 - [HAProxy :octicons-link-external-16:](http://www.haproxy.org/).
 
+If you install the software from packages, all required dependencies and service unit files are included. If you [install the software from the tarballs](tarball.md), you must first enable `etcd`. See the steps in the [etcd](#etcd) section of this document.
+
 See the configuration guidelines for [Debian and Ubuntu](solutions/ha-setup-apt.md) and [RHEL and CentOS](solutions/ha-setup-yum.md).
 
-!!! important
+## etcd
+
+The following steps apply if you [installed etcd from the tarballs](tarball.md).
 
-    To configure high-availability with [the software installed from the tarballs](tarball.md), install the Python client for `etcd` to resolve dependency issues. Use the following command:
+1. Install the Python client for `etcd` to resolve dependency issues. Use the following command:
 
     ```{.bash data-prompt="$"}
    $ /opt/percona-python3/bin/pip3 install python-etcd
    ```
-    
+
+2. Create the `etcd.service` file. This file allows `systemd` to start, stop, restart, and manage the `etcd` service. This includes handling dependencies, monitoring the service, and ensuring it runs as expected.
+
+    ```ini title="/etc/systemd/system/etcd.service"
+    [Unit]
+    After=network.target
+    Description=etcd - highly-available key value store
+
+    [Service]
+    LimitNOFILE=65536
+    Restart=on-failure
+    Type=notify
+    ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf.yaml
+    User=etcd
+
+    [Install]
+    WantedBy=multi-user.target
+    ```
+
+
 
 ## pgBadger
 
diff --git a/docs/solutions/ha-setup-apt.md b/docs/solutions/ha-setup-apt.md
index 533ca3ead..c3baa590a 100644
--- a/docs/solutions/ha-setup-apt.md
+++ b/docs/solutions/ha-setup-apt.md
@@ -127,6 +127,8 @@ The distributed configuration store provides a reliable way to store data that n
 
 This document provides configuration for etcd version 3.5.x. For how to configure etcd cluster with earlier versions of etcd, read the blog post by _Fernando Laudares Camargos_ and _Jobin Augustine_ [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios](https://www.percona.com/blog/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/)
 
+If you [installed the software from tarballs](../tarball.md), check how you [enable etcd](../enable-extensions.md#etcd).
+
 The `etcd` cluster is first started in one node and then the subsequent nodes are added to the first node using the `add `command.
 
 !!! note
@@ -326,6 +328,10 @@ Run the following commands on all nodes. You can do this in parallel:
           max_replication_slots: 10
           wal_log_hints: "on"
           logging_collector: 'on'
+          max_wal_size: '10GB'
+          archive_mode: "on"
+          archive_timeout: 600s
+          archive_command: "cp -f %p /home/postgres/archived/%f"
 
   # some desired options for 'initdb'
   initdb: # Note: It needs to be a list (some options need values, others are switches)
@@ -357,7 +363,7 @@ Run the following commands on all nodes. You can do this in parallel:
   connect_address: ${NODE_IP}:5432
   data_dir: ${DATA_DIR}
   bin_dir: ${PG_BIN_DIR}
-  pgpass: /tmp/pgpass
+  pgpass: /tmp/pgpass0
   authentication:
     replication:
       username: replicator
diff --git a/docs/solutions/ha-setup-yum.md b/docs/solutions/ha-setup-yum.md
index 512e20c90..d88fe1595 100644
--- a/docs/solutions/ha-setup-yum.md
+++ b/docs/solutions/ha-setup-yum.md
@@ -118,7 +118,9 @@ It's not necessary to have name resolution, but it makes the whole setup more re
 
 The distributed configuration store provides a reliable way to store data that needs to be accessed by large scale distributed systems. The most popular implementation of the distributed configuration store is etcd. etcd is deployed as a cluster for fault-tolerance and requires an odd number of members (n/2+1) to agree on updates to the cluster state. An etcd cluster helps establish a consensus among nodes during a failover and manages the configuration for the three PostgreSQL instances.
 
-This document provides configuration for etcd version 3.5.x. For how to configure etcd cluster with earlier versions of etcd, read the blog post by _Fernando Laudares Camargos_ and _Jobin Augustine_ [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios](https://www.percona.com/blog/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/)
+This document provides configuration for etcd version 3.5.x. For how to configure etcd cluster with earlier versions of etcd, read the blog post by _Fernando Laudares Camargos_ and _Jobin Augustine_ [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios](https://www.percona.com/blog/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/).
+
+If you [installed the software from tarballs](../tarball.md), check how you [enable etcd](../enable-extensions.md#etcd).
 
 The `etcd` cluster is first started in one node and then the subsequent nodes are added to the first node using the `add `command.
 
@@ -321,9 +323,6 @@ Run the following commands on all nodes. You can do this in parallel:
       loop_wait: 10
       retry_timeout: 10
       maximum_lag_on_failover: 1048576
-      slots:
-        percona_cluster_1:
-          type: physical
 
       postgresql:
         use_pg_rewind: true
@@ -336,6 +335,10 @@ Run the following commands on all nodes. You can do this in parallel:
           max_replication_slots: 10
           wal_log_hints: "on"
           logging_collector: 'on'
+          max_wal_size: '10GB'
+          archive_mode: "on"
+          archive_timeout: 600s
+          archive_command: "cp -f %p /home/postgres/archived/%f"
 
   # some desired options for 'initdb'
   initdb: # Note: It needs to be a list (some options need values, others are switches)
@@ -367,7 +370,7 @@ Run the following commands on all nodes. You can do this in parallel:
   connect_address: ${NODE_IP}:5432
   data_dir: ${DATA_DIR}
   bin_dir: ${PG_BIN_DIR}
-  pgpass: /tmp/pgpass
+  pgpass: /tmp/pgpass0
   authentication:
     replication:
       username: replicator
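
Not part of the patch above: once the sample `etcd.service` unit file from the first hunk is in place on a tarball-based install, it still has to be loaded and started. Below is a minimal sketch using standard `systemd` and `etcdctl` commands, assuming the paths in the sample unit (`/usr/bin/etcd`, `/etc/etcd/etcd.conf.yaml`) match your layout.

```bash
# Reload systemd so it picks up the new unit file, then start etcd on this node
sudo systemctl daemon-reload
sudo systemctl enable --now etcd

# Verify that the service is running and the local member answers health checks
sudo systemctl status etcd
etcdctl endpoint health
```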
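
Also not part of the patch: the `archive_command` added to the Patroni bootstrap configuration copies WAL segments to `/home/postgres/archived/`, so that directory must exist and be writable by the `postgres` OS user on every node before archiving can succeed. A minimal sketch:

```bash
# Create the WAL archive destination used by archive_command
# and hand ownership to the postgres user
sudo mkdir -p /home/postgres/archived
sudo chown postgres:postgres /home/postgres/archived
```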