diff --git a/docs/css/design.css b/docs/css/design.css index 14f9728b6..94a0c2caa 100644 --- a/docs/css/design.css +++ b/docs/css/design.css @@ -269,6 +269,7 @@ vertical-align: baseline; padding: 0 0.2em 0.1em; border-radius: 0.15em; + white-space: pre-wrap; /* Ensure long lines wrap */ } .md-typeset .highlight code span, .md-typeset code, diff --git a/docs/solutions/ha-setup-apt.md b/docs/solutions/ha-setup-apt.md index 3870acf75..6349a565b 100644 --- a/docs/solutions/ha-setup-apt.md +++ b/docs/solutions/ha-setup-apt.md @@ -24,19 +24,21 @@ This guide provides instructions on how to set up a highly available PostgreSQL We recommend not to expose the hosts/nodes where Patroni / etcd / PostgreSQL are running to public networks due to security risks. Use Firewalls, Virtual networks, subnets or the like to protect the database hosts from any kind of attack. -## Initial setup +Configure every node. + +### Set up hostnames in the `/etc/hosts` file It’s not necessary to have name resolution, but it makes the whole setup more readable and less error prone. Here, instead of configuring a DNS, we use a local name resolution by updating the file /etc/hosts. By resolving their hostnames to their IP addresses, we make the nodes aware of each other’s names and allow their seamless communication. -1. Run the following command on each node. Change the node name to `node1`, `node2` and `node3` respectively: +=== "node1" - ```{.bash data-prompt="$"} - $ sudo hostnamectl set-hostname node1 - ``` + 1. Set up the hostname for the node -2. Modify the `/etc/hosts` file of each PostgreSQL node to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname node1 + ``` - === "node1" + 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: ```text hl_lines="3 4" # Cluster IP and names @@ -45,7 +47,15 @@ It’s not necessary to have name resolution, but it makes the whole setup more 10.104.0.3 node3 ``` - === "node2" +=== "node2" + + 1. Set up the hostname for the node + + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname node2 + ``` + + 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: ```text hl_lines="2 4" # Cluster IP and names @@ -54,7 +64,15 @@ It’s not necessary to have name resolution, but it makes the whole setup more 10.104.0.3 node3 ``` - === "node3" +=== "node3" + + 1. Set up the hostname for the node + + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname node3 + ``` + + 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: ```text hl_lines="2 3" # Cluster IP and names @@ -63,11 +81,17 @@ It’s not necessary to have name resolution, but it makes the whole setup more 10.104.0.3 node3 ``` - === "HAproxy-demo" +=== "HAproxy-demo" - The HAProxy instance should have the name resolution for all the three nodes in its `/etc/hosts` file. Add the following lines at the end of the file: + 1. Set up the hostname for the node + + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname HAProxy-demo + ``` - ```text hl_lines="4 5 6" + 2. Modify the `/etc/hosts` file. 
The HAProxy instance should have the name resolution for all the three nodes in its `/etc/hosts` file. Add the following lines at the end of the file: + + ```text hl_lines="3 4 5" # Cluster IP and names 10.104.0.6 HAProxy-demo 10.104.0.1 node1 @@ -75,22 +99,29 @@ It’s not necessary to have name resolution, but it makes the whole setup more 10.104.0.3 node3 ``` - ### Install the software Run the following commands on `node1`, `node2` and `node3`: 1. Install Percona Distribution for PostgreSQL - * [Install `percona-release` :octicons-link-external-16:](https://www.percona.com/doc/percona-repo-config/installing.html). + * Disable the upstream `postgresql-{{pgversion}}` package. - * Enable the repository: + * Install the `percona-release` repository management tool - ```{.bash data-prompt="$"} - $ sudo percona-release setup ppg14 - ``` + --8<-- "percona-release-apt.md" - * [Install Percona Distribution for PostgreSQL packages](../apt.md). + * Enable the repository + + ```{.bash data-prompt="$"} + $ sudo percona-release setup ppg{{pgversion}} + ``` + + * Install Percona Distribution for PostgreSQL package + + ```{.bash data-prompt="$"} + $ sudo apt install percona-postgresql-{{pgversion}} + ``` 2. Install some Python and auxiliary packages to help with Patroni and etcd @@ -123,141 +154,134 @@ Run the following commands on `node1`, `node2` and `node3`: ## Configure etcd distributed store -The distributed configuration store helps establish a consensus among nodes during a failover and will manage the configuration for the three PostgreSQL instances. Although Patroni can work with other distributed consensus stores (i.e., Zookeeper, Consul, etc.), the most commonly used one is `etcd`. - -This document provides configuration for etcd version 3.5.x. For how to configure etcd cluster with earlier versions of etcd, read the blog post by _Fernando Laudares Camargos_ and _Jobin Augustine_ [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios :octicons-link-external-16:](https://www.percona.com/blog/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/) - -If you [installed the software from tarballs](../tarball.md), check how you [enable etcd](../enable-extensions.md#etcd). - -The `etcd` cluster is first started in one node and then the subsequent nodes are added to the first node using the `add `command. +In our implementation we use etcd distributed configuration store. [Refresh your knowledge about etcd](high-availability.md#etcd). !!! note + + If you [installed the software from tarballs](../tarball.md), you must first [enable etcd](../enable-extensions.md#etcd) before configuring it. - Users with deeper understanding of how etcd works can configure and start all etcd nodes at a time and bootstrap the cluster using one of the following methods: - - * Static in the case when the IP addresses of the cluster nodes are known - * Discovery service - for cases when the IP addresses of the cluster are not known ahead of time. - - See the [How to configure etcd nodes simultaneously](../how-to.md#how-to-configure-etcd-nodes-simultaneously) section for details. - -### Configure `node1` - -1. Create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node name and IP address with the actual name and IP address of your node. 
-
-    ```yaml title="/etc/etcd/etcd.conf.yaml"
-    name: 'node1'
-    initial-cluster-token: PostgreSQL_HA_Cluster_1
-    initial-cluster-state: new
-    initial-cluster: node1=http://10.104.0.1:2380
-    data-dir: /var/lib/etcd
-    initial-advertise-peer-urls: http://10.104.0.1:2380
-    listen-peer-urls: http://10.104.0.1:2380
-    advertise-client-urls: http://10.104.0.1:2379
-    listen-client-urls: http://10.104.0.1:2379
-    ```
+To get started with the `etcd` cluster, you need to bootstrap it. This means setting up the initial configuration and starting the etcd nodes so they can form a cluster. You can use one of the following bootstrapping mechanisms:
 
-2. Enable and start the `etcd` service to apply the changes on `node1`.
+* Static - when the IP addresses of the cluster nodes are known in advance
+* Discovery service - when the IP addresses of the cluster nodes are not known ahead of time
+
+Since we know the IP addresses of the nodes, we use the static method. For the discovery service, refer to the [etcd documentation :octicons-link-external-16:](https://etcd.io/docs/v3.5/op-guide/clustering/#etcd-discovery){:target="_blank"}.
+
+We will configure and start all etcd nodes in parallel. This can be done either by modifying each node's configuration file or by using command-line options. Use the method that you prefer.
+
+### Method 1. Modify the configuration file
+
+1. Create the etcd configuration file on every node. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own. Replace the node names and IP addresses with the actual names and IP addresses of your nodes.
+
+    === "node1"
+
+        ```yaml title="/etc/etcd/etcd.conf.yaml"
+        name: 'node1'
+        initial-cluster-token: PostgreSQL_HA_Cluster_1
+        initial-cluster-state: new
+        initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+        data-dir: /var/lib/etcd
+        initial-advertise-peer-urls: http://10.104.0.1:2380
+        listen-peer-urls: http://10.104.0.1:2380
+        advertise-client-urls: http://10.104.0.1:2379
+        listen-client-urls: http://10.104.0.1:2379
+        ```
+
+    === "node2"
+
+        ```yaml title="/etc/etcd/etcd.conf.yaml"
+        name: 'node2'
+        initial-cluster-token: PostgreSQL_HA_Cluster_1
+        initial-cluster-state: new
+        initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+        data-dir: /var/lib/etcd
+        initial-advertise-peer-urls: http://10.104.0.2:2380
+        listen-peer-urls: http://10.104.0.2:2380
+        advertise-client-urls: http://10.104.0.2:2379
+        listen-client-urls: http://10.104.0.2:2379
+        ```
+
+    === "node3"
+
+        ```yaml title="/etc/etcd/etcd.conf.yaml"
+        name: 'node3'
+        initial-cluster-token: PostgreSQL_HA_Cluster_1
+        initial-cluster-state: new
+        initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+        data-dir: /var/lib/etcd
+        initial-advertise-peer-urls: http://10.104.0.3:2380
+        listen-peer-urls: http://10.104.0.3:2380
+        advertise-client-urls: http://10.104.0.3:2379
+        listen-client-urls: http://10.104.0.3:2379
+        ```
+
+2. Enable and start the `etcd` service on all nodes:
 
     ```{.bash data-prompt="$"}
     $ sudo systemctl enable --now etcd
     $ sudo systemctl status etcd
     ```
 
-3. Check the etcd cluster members on `node1`:
+    During the node start, etcd searches for the other cluster nodes defined in the configuration. If the other nodes are not running yet, the start may fail with a quorum timeout. This is expected behavior. Start all nodes again at approximately the same time so that they can form the cluster.
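+
+    If you manage all three servers from one workstation, you can start the service on all of them at nearly the same time with a small helper like the one below. This is only a sketch: it assumes passwordless SSH access and passwordless `sudo` for your user on `node1`, `node2` and `node3`. Otherwise, simply run the `systemctl` commands on each node by hand:
+
+    ```{.bash data-prompt="$"}
+    $ for node in node1 node2 node3; do ssh "$node" "sudo systemctl restart etcd" & done; wait
+    ```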
- ```{.bash data-prompt="$"} - $ sudo etcdctl member list - ``` - - ??? example "Sample output" +--8<-- "check-etcd.md" - ```{.text .no-copy} - 21d50d7f768f153a: name=default peerURLs=http://10.104.0.1:2380 clientURLs=http:// 10.104.0.1:2379 isLeader=true - ``` - -4. Add the `node2` to the cluster. Run the following command on `node1`: - - ```{.bash data-promp="$"} - $ sudo etcdctl member add node2 http://10.104.0.2:2380 - $ sudo etcdctl member add node3 http://10.104.0.8:2380 - ``` - - ??? example "Sample output" - - ```{.text .no-copy} - Added member named node2 with ID 10042578c504d052 to cluster - - etcd_NAME="node2" - etcd_INITIAL_CLUSTER="node2=http://10.104.0.2:2380,node1=http://10.104.0.1:2380" - etcd_INITIAL_CLUSTER_STATE="existing" - ``` - -### Configure `node2` +### Method 2. Start etcd nodes with command line options -1. Create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node names and IP addresses with the actual names and IP addresses of your nodes. +1. On each etcd node, set the environment variables for the cluster members, the cluster token and state: - ```yaml title="/etc/etcd/etcd.conf.yaml" - name: 'node2' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: existing - initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.2:2380 - listen-peer-urls: http://10.104.0.2:2380 - advertise-client-urls: http://10.104.0.2:2379 - listen-client-urls: http://10.104.0.2:2379 ``` - -2. Enable and start the `etcd` service to apply the changes on `node2`: - - ```{.bash data-prompt="$"} - $ sudo systemctl enable --now etcd - $ sudo systemctl status etcd + TOKEN=PostgreSQL_HA_Cluster_1 + CLUSTER_STATE=new + NAME_1=node1 + NAME_2=node2 + NAME_3=node3 + HOST_1=10.104.0.1 + HOST_2=10.104.0.2 + HOST_3=10.104.0.3 + CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380 ``` -### Configure `node3` +2. Start each etcd node in parallel using the following command: -1. Add `node3` to the cluster. **Run the following command on `node1`** + === "node1" - ```{.bash data-prompt="$"} - $ sudo etcdctl member add node3 http://10.104.0.3:2380 - ``` - -2. On `node3`, create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node names and IP addresses with the actual names and IP addresses of your nodes. - - ```yaml title="/etc/etcd/etcd.conf.yaml" - name: 'node1' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: existing - initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:238node3=http://10.104.0.3:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.3:2380 - listen-peer-urls: http://10.104.0.3:2380 - advertise-client-urls: http://10.104.0.3:2379 - listen-client-urls: http://10.104.0.3:2379 - ``` + ```{.bash data-prompt="$"} + THIS_NAME=${NAME_1} + THIS_IP=${HOST_1} + etcd --data-dir=data.etcd --name ${THIS_NAME} \ + --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ + --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ + --initial-cluster ${CLUSTER} \ + --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} + ``` -3. Enable and start the `etcd` service to apply the changes on `node3`. 
+ === "node2" - ```{.bash data-prompt="$"} - $ sudo systemctl enable --now etcd - $ sudo systemctl status etcd - ``` - -4. Check the etcd cluster members. - - ```{.bash data-promp="$"} - $ sudo etcdctl member list - ``` + ```{.bash data-prompt="$"} + THIS_NAME=${NAME_2} + THIS_IP=${HOST_2} + etcd --data-dir=data.etcd --name ${THIS_NAME} \ + --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ + --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ + --initial-cluster ${CLUSTER} \ + --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} + ``` - ??? example "Sample output" + === "node3" - ```{.text .no-copy} - 2d346bd3ae7f07c4: name=node2 peerURLs=http://10.104.0.2:2380 clientURLs=http://10.104. 0.2:2379 isLeader=false - 8bacb519ebdee8db: name=node3 peerURLs=http://10.104.0.3:2380 clientURLs=http://10.104. 0.3:2379 isLeader=false - c5f52ea2ade25e1b: name=node1 peerURLs=http://10.104.0.1:2380 clientURLs=http://10.104. 0.1:2379 isLeader=true + ```{.bash data-prompt="$"} + THIS_NAME=${NAME_3} + THIS_IP=${HOST_3} + etcd --data-dir=data.etcd --name ${THIS_NAME} \ + --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ + --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ + --initial-cluster ${CLUSTER} \ + --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} ``` +--8<-- "check-etcd.md" + ## Configure Patroni Run the following commands on all nodes. You can do this in parallel: @@ -292,7 +316,7 @@ Run the following commands on all nodes. You can do this in parallel: SCOPE="cluster_1" ``` -2. Create the `/etc/patroni/patroni.yml` configuration file. The file holds the default configuration values for a PostgreSQL cluster and will reflect the current cluster setup. Add the following configuration for `node1`: +2. Use the following command to create the `/etc/patroni/patroni.yml` configuration file and add the following configuration for `node1`: ```bash echo " @@ -396,39 +420,39 @@ Run the following commands on all nodes. You can do this in parallel: 3. Check that the `systemd` unit file `patroni.service` is created in `/etc/systemd/system`. If it is created, skip this step. -3. Check that the systemd unit file `patroni.service` is created in `/etc/systemd/system`. If it is created, skip this step. +3. Check that the systemd unit file `percona-patroni.service` is created in `/etc/systemd/system`. If it is created, skip this step. 
- If it's **not created**, create it manually and specify the following contents within: + If it's **not created**, create it manually and specify the following contents within: - ```ini title="/etc/systemd/system/patroni.service" - [Unit] - Description=Runners to orchestrate a high-availability PostgreSQL - After=syslog.target network.target + ```ini title="/etc/systemd/system/percona-patroni.service" + [Unit] + Description=Runners to orchestrate a high-availability PostgreSQL + After=syslog.target network.target - [Service] - Type=simple + [Service] + Type=simple - User=postgres - Group=postgres + User=postgres + Group=postgres - # Start the patroni process - ExecStart=/bin/patroni /etc/patroni/patroni.yml + # Start the patroni process + ExecStart=/bin/patroni /etc/patroni/patroni.yml - # Send HUP to reload from patroni.yml - ExecReload=/bin/kill -s HUP $MAINPID + # Send HUP to reload from patroni.yml + ExecReload=/bin/kill -s HUP $MAINPID - # only kill the patroni process, not its children, so it will gracefully stop postgres - KillMode=process + # only kill the patroni process, not its children, so it will gracefully stop postgres + KillMode=process - # Give a reasonable amount of time for the server to start up/shut down - TimeoutSec=30 + # Give a reasonable amount of time for the server to start up/shut down + TimeoutSec=30 - # Do not restart the service if it crashes, we want to manually inspect database on failure - Restart=no + # Do not restart the service if it crashes, we want to manually inspect database on failure + Restart=no - [Install] - WantedBy=multi-user.target - ``` + [Install] + WantedBy=multi-user.target + ``` 4. Make `systemd` aware of the new service: @@ -436,7 +460,9 @@ Run the following commands on all nodes. You can do this in parallel: $ sudo systemctl daemon-reload ``` -5. Now it's time to start Patroni. You need the following commands on all nodes but not in parallel. Start with the `node1` first, wait for the service to come to live, and then proceed with the other nodes one-by-one, always waiting for them to sync with the primary node: +5. Repeat steps 1-4 on the remaining nodes. In the end you must have the configuration file and the systemd unit file created on every node. +6. Now it's time to start Patroni. You need the following commands on all nodes but not in parallel. Start with the `node1` first, wait for the service to come to live, and then proceed with the other nodes one-by-one, always waiting for them to sync with the primary node: + ```{.bash data-prompt="$"} $ sudo systemctl enable --now patroni @@ -445,7 +471,7 @@ Run the following commands on all nodes. You can do this in parallel: When Patroni starts, it initializes PostgreSQL (because the service is not currently running and the data directory is empty) following the directives in the bootstrap section of the configuration file. -6. Check the service to see if there are errors: +7. Check the service to see if there are errors: ```{.bash data-prompt="$"} $ sudo journalctl -fu patroni @@ -455,31 +481,22 @@ Run the following commands on all nodes. You can do this in parallel: Changing the patroni.yml file and restarting the service will not have any effect here because the bootstrap section specifies the configuration to apply when PostgreSQL is first started in the node. It will not repeat the process even if the Patroni configuration file is modified and the service is restarted. -7. Check the cluster: +8. Check the cluster. 
Run the following command on any node: ```{.bash data-prompt="$"} $ patronictl -c /etc/patroni/patroni.yml list $SCOPE ``` - The output on `node1` resembles the following: - - ```{.text .no-copy} - + Cluster: cluster_1 --+---------+---------+----+-----------+ - | Member | Host | Role | State | TL | Lag in MB | - +--------+-------------+---------+---------+----+-----------+ - | node-1 | 10.0.100.1 | Leader | running | 1 | | - +--------+-------------+---------+---------+----+-----------+ - ``` + The output resembles the following: - On the remaining nodes: - ```{.text .no-copy} - + Cluster: cluster_1 --+---------+---------+----+-----------+ - | Member | Host | Role | State | TL | Lag in MB | - +--------+-------------+---------+---------+----+-----------+ - | node-1 | 10.0.100.1 | Leader | running | 1 | | - | node-2 | 10.0.100.2 | Replica | running | 1 | 0 | - +--------+-------------+---------+---------+----+-----------+ + + Cluster: cluster_1 (7440127629342136675) -----+----+-------+ + | Member | Host | Role | State | TL | Lag in MB | + +--------+------------+---------+-----------+----+-----------+ + | node1 | 10.0.100.1 | Leader | running | 1 | | + | node2 | 10.0.100.2 | Replica | streaming | 1 | 0 | + | node3 | 10.0.100.3 | Replica | streaming | 1 | 0 | + +--------+------------+---------+-----------+----+-----------+ ``` If Patroni has started properly, you should be able to locally connect to a PostgreSQL node using the following command: @@ -491,8 +508,7 @@ $ sudo psql -U postgres The command output looks like the following: ``` -psql (14.10) - +psql ({{pgversion}}) Type "help" for help. postgres=# diff --git a/docs/solutions/ha-setup-yum.md b/docs/solutions/ha-setup-yum.md index ca824f8c1..c9cc720f7 100644 --- a/docs/solutions/ha-setup-yum.md +++ b/docs/solutions/ha-setup-yum.md @@ -29,15 +29,15 @@ This guide provides instructions on how to set up a highly available PostgreSQL It's not necessary to have name resolution, but it makes the whole setup more readable and less error prone. Here, instead of configuring a DNS, we use a local name resolution by updating the file `/etc/hosts`. By resolving their hostnames to their IP addresses, we make the nodes aware of each other's names and allow their seamless communication. -1. Run the following command on each node. Change the node name to `node1`, `node2` and `node3` respectively: +=== "node1" - ```{.bash data-prompt="$"} - $ sudo hostnamectl set-hostname node-1 - ``` + 1. Set up the hostname for the node -2. Modify the `/etc/hosts` file of each PostgreSQL node to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname node1 + ``` - === "node1" + 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes: ```text hl_lines="3 4" # Cluster IP and names @@ -46,7 +46,15 @@ It's not necessary to have name resolution, but it makes the whole setup more re 10.104.0.3 node3 ``` - === "node2" +=== "node2" + + 1. Set up the hostname for the node + + ```{.bash data-prompt="$"} + $ sudo hostnamectl set-hostname node2 + ``` + + 2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. 
Add the following at the end of the `/etc/hosts` file on all nodes:
 
        ```text hl_lines="2 4"
        # Cluster IP and names
@@ -55,7 +63,15 @@ It's not necessary to have name resolution, but it makes the whole setup more re
        10.104.0.3 node3
        ```
 
-    === "node3"
+=== "node3"
+
+    1. Set up the hostname for the node
+
+        ```{.bash data-prompt="$"}
+        $ sudo hostnamectl set-hostname node3
+        ```
+
+    2. Modify the `/etc/hosts` file to include the hostnames and IP addresses of the remaining nodes. Add the following at the end of the `/etc/hosts` file on all nodes:
 
        ```text hl_lines="2 3"
        # Cluster IP and names
@@ -64,11 +80,17 @@ It's not necessary to have name resolution, but it makes the whole setup more re
        10.104.0.3 node3
        ```
 
-    === "HAproxy-demo"
+=== "HAproxy-demo"
 
-        The HAProxy instance should have the name resolution for all the three nodes in its `/etc/hosts` file. Add the following lines at the end of the file:
+    1. Set up the hostname for the node
+
+        ```{.bash data-prompt="$"}
+        $ sudo hostnamectl set-hostname HAProxy-demo
+        ```
 
-        ```text hl_lines="4 5 6"
+    2. Modify the `/etc/hosts` file. The HAProxy instance should have the name resolution for all the three nodes in its `/etc/hosts` file. Add the following lines at the end of the file:
+
+        ```text hl_lines="3 4 5"
        # Cluster IP and names
        10.104.0.6 HAProxy-demo
        10.104.0.1 node1
@@ -78,16 +100,26 @@ It's not necessary to have name resolution, but it makes the whole setup more re
 
 ### Install the software
 
-1. Install Percona Distribution for PostgreSQL on `node1`, `node2` and `node3` from Percona repository:
+Run the following commands on `node1`, `node2` and `node3`:
+
+1. Install Percona Distribution for PostgreSQL:
+
+    * Check the [platform-specific notes](../yum.md#for-percona-distribution-for-postgresql-packages)
+    * Install the `percona-release` repository management tool
+
+    --8<-- "percona-release-yum.md"
 
-    * [Install `percona-release` :octicons-link-external-16:](https://www.percona.com/doc/percona-repo-config/installing.html).
 
    * Enable the repository:
 
      ```{.bash data-prompt="$"}
      $ sudo percona-release setup ppg14
      ```
 
-    * [Install Percona Distribution for PostgreSQL packages](../yum.md).
+    * Install the Percona Distribution for PostgreSQL package
+
+        ```{.bash data-prompt="$"}
+        $ sudo yum install percona-postgresql{{pgversion}}-server
+        ```
 
 !!! important
 
@@ -116,139 +148,134 @@ It's not necessary to have name resolution, but it makes the whole setup more re
 
 ## Configure etcd distributed store
 
-The distributed configuration store helps establish a consensus among nodes during a failover and will manage the configuration for the three PostgreSQL instances. Although Patroni can work with other distributed consensus stores (i.e., Zookeeper, Consul, etc.), the most commonly used one is `etcd`.
-
-This document provides configuration for etcd version 3.5.x. For how to configure etcd cluster with earlier versions of etcd, read the blog post by _Fernando Laudares Camargos_ and _Jobin Augustine_ [PostgreSQL HA with Patroni: Your Turn to Test Failure Scenarios](https://www.percona.com/blog/postgresql-ha-with-patroni-your-turn-to-test-failure-scenarios/).
-
-If you [installed the software from tarballs](../tarball.md), check how you [enable etcd](../enable-extensions.md#etcd).
-
-The `etcd` cluster is first started in one node and then the subsequent nodes are added to the first node using the `add` command.
+In our implementation, we use etcd as the distributed configuration store. [Refresh your knowledge about etcd](high-availability.md#etcd).
 
 !!! note
+
+    If you [installed the software from tarballs](../tarball.md), you must first [enable etcd](../enable-extensions.md#etcd) before configuring it.
 
-    Users with deeper understanding of how etcd works can configure and start all etcd nodes at a time and bootstrap the cluster using one of the following methods:
-
-    * Static in the case when the IP addresses of the cluster nodes are known
-    * Discovery service - for cases when the IP addresses of the cluster are not known ahead of time.
-
-    See the [How to configure etcd nodes simultaneously](../how-to.md#how-to-configure-etcd-nodes-simultaneously) section for details.
-
-### Configure `node1`
-
-1. Create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node name and IP address with the actual name and IP address of your node.
+To get started with the `etcd` cluster, you need to bootstrap it. This means setting up the initial configuration and starting the etcd nodes so they can form a cluster. You can use one of the following bootstrapping mechanisms:
 
-    ```yaml title="/etc/etcd/etcd.conf.yaml"
-    name: 'node1'
-    initial-cluster-token: PostgreSQL_HA_Cluster_1
-    initial-cluster-state: new
-    initial-cluster: node1=http://10.104.0.1:2380
-    data-dir: /var/lib/etcd
-    initial-advertise-peer-urls: http://10.104.0.1:2380
-    listen-peer-urls: http://10.104.0.1:2380
-    advertise-client-urls: http://10.104.0.1:2379
-    listen-client-urls: http://10.104.0.1:2379
-    ```
-
-4. Start the `etcd` service to apply the changes on `node1`:
+* Static - when the IP addresses of the cluster nodes are known in advance
+* Discovery service - when the IP addresses of the cluster nodes are not known ahead of time
+
+Since we know the IP addresses of the nodes, we use the static method. For the discovery service, refer to the [etcd documentation :octicons-link-external-16:](https://etcd.io/docs/v3.5/op-guide/clustering/#etcd-discovery){:target="_blank"}.
+
+We will configure and start all etcd nodes in parallel. This can be done either by modifying each node's configuration file or by using command-line options. Use the method that you prefer.
+
+### Method 1. Modify the configuration file
+
+1. Create the etcd configuration file on every node. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own. Replace the node names and IP addresses with the actual names and IP addresses of your nodes.
+
+    === "node1"
+
+        ```yaml title="/etc/etcd/etcd.conf.yaml"
+        name: 'node1'
+        initial-cluster-token: PostgreSQL_HA_Cluster_1
+        initial-cluster-state: new
+        initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+        data-dir: /var/lib/etcd
+        initial-advertise-peer-urls: http://10.104.0.1:2380
+        listen-peer-urls: http://10.104.0.1:2380
+        advertise-client-urls: http://10.104.0.1:2379
+        listen-client-urls: http://10.104.0.1:2379
+        ```
+
+    === "node2"
+
+        ```yaml title="/etc/etcd/etcd.conf.yaml"
+        name: 'node2'
+        initial-cluster-token: PostgreSQL_HA_Cluster_1
+        initial-cluster-state: new
+        initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+        data-dir: /var/lib/etcd
+        initial-advertise-peer-urls: http://10.104.0.2:2380
+        listen-peer-urls: http://10.104.0.2:2380
+        advertise-client-urls: http://10.104.0.2:2379
+        listen-client-urls: http://10.104.0.2:2379
+        ```
+
+    === "node3"
+
+        ```yaml title="/etc/etcd/etcd.conf.yaml"
+        name: 'node3'
+        initial-cluster-token: PostgreSQL_HA_Cluster_1
+        initial-cluster-state: new
+        initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
+        data-dir: /var/lib/etcd
+        initial-advertise-peer-urls: http://10.104.0.3:2380
+        listen-peer-urls: http://10.104.0.3:2380
+        advertise-client-urls: http://10.104.0.3:2379
+        listen-client-urls: http://10.104.0.3:2379
+        ```
+
+2. Enable and start the `etcd` service on all nodes:
 
     ```{.bash data-prompt="$"}
     $ sudo systemctl enable --now etcd
     $ sudo systemctl status etcd
    ```
 
-5. Check the etcd cluster members on `node1`:
-
-    ```{.bash data-prompt="$"}
-    $ sudo etcdctl member list
-    ```
+    During the node start, etcd searches for the other cluster nodes defined in the configuration. If the other nodes are not running yet, the start may fail with a quorum timeout. This is expected behavior. Start all nodes again at approximately the same time so that they can form the cluster.
 
-    ??? example "Sample output"
+--8<-- "check-etcd.md"
 
-        ```{.text .no-copy}
-        21d50d7f768f153a: name=default peerURLs=http://10.104.0.5:2380 clientURLs=http://10. 104.0.5:2379 isLeader=true
-        ```
+### Method 2. Start etcd nodes with command line options
 
-6. Add `node2` to the cluster. Run the following command on `node1`:
-
-    ```{.bash data-prompt="$"}
-    $ sudo etcdctl member add node2 http://10.104.0.2:2380
-    ```
+1. On each etcd node, set the environment variables for the cluster members, the cluster token and state:
 
-    ??? example "Sample output"
-
-        ```{.text .no-copy}
-        Added member named node2 with ID 10042578c504d052 to cluster
-
-        etcd_NAME="node2"
-        etcd_INITIAL_CLUSTER="node2=http://10.104.0.2:2380,node1=http://10.104.0.1:2380"
-        etcd_INITIAL_CLUSTER_STATE="existing"
-        ```
-
-### Configure `node2`
-
-1. Create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node names and IP addresses with the actual names and IP addresses of your nodes.
-
-    ```yaml title="/etc/etcd/etcd.conf.yaml"
-    name: 'node2'
-    initial-cluster-token: PostgreSQL_HA_Cluster_1
-    initial-cluster-state: existing
-    initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380
-    data-dir: /var/lib/etcd
-    initial-advertise-peer-urls: http://10.104.0.2:2380
-    listen-peer-urls: http://10.104.0.2:2380
-    advertise-client-urls: http://10.104.0.2:2379
-    listen-client-urls: http://10.104.0.2:2379
    ```
-
-2.
Start the `etcd` service to apply the changes on `node2`: - - ```{.bash data-prompt="$"} - $ sudo systemctl enable --now etcd - $ sudo systemctl status etcd + TOKEN=PostgreSQL_HA_Cluster_1 + CLUSTER_STATE=new + NAME_1=node1 + NAME_2=node2 + NAME_3=node3 + HOST_1=10.104.0.1 + HOST_2=10.104.0.2 + HOST_3=10.104.0.3 + CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380 ``` -### Configure `node3` +2. Start each etcd node in parallel using the following command: -1. Add `node3` to the cluster. **Run the following command on `node1`**: - - ```{.bash data-prompt="$"} - $ sudo etcdctl member add node3 http://10.104.0.3:2380 - ``` + === "node1" -2. On `node3`, create the configuration file. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own one. Replace the node names and IP addresses with the actual names and IP addresses of your nodes: - - ```yaml title="/etc/etcd/etcd.conf.yaml" - name: 'node1' - initial-cluster-token: PostgreSQL_HA_Cluster_1 - initial-cluster-state: existing - initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380 - data-dir: /var/lib/etcd - initial-advertise-peer-urls: http://10.104.0.3:2380 - listen-peer-urls: http://10.104.0.3:2380 - advertise-client-urls: http://10.104.0.3:2379 - listen-client-urls: http://10.104.0.3:2379 - ``` + ```{.bash data-prompt="$"} + THIS_NAME=${NAME_1} + THIS_IP=${HOST_1} + etcd --data-dir=data.etcd --name ${THIS_NAME} \ + --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ + --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ + --initial-cluster ${CLUSTER} \ + --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} + ``` -3. Start the `etcd` service to apply the changes. + === "node2" - ```{.bash data-prompt="$"} - $ sudo systemctl enable --now etcd - $ sudo systemctl status etcd - ``` + ```{.bash data-prompt="$"} + THIS_NAME=${NAME_2} + THIS_IP=${HOST_2} + etcd --data-dir=data.etcd --name ${THIS_NAME} \ + --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ + --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ + --initial-cluster ${CLUSTER} \ + --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} + ``` -4. Check the etcd cluster members. - - ```{.bash data-prompt="$"} - $ sudo etcdctl member list - ``` + === "node3" - ??? example "Sample output" + ```{.bash data-prompt="$"} + THIS_NAME=${NAME_3} + THIS_IP=${HOST_3} + etcd --data-dir=data.etcd --name ${THIS_NAME} \ + --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \ + --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \ + --initial-cluster ${CLUSTER} \ + --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN} + ``` - ```{.text .no-copy} - 2d346bd3ae7f07c4: name=node2 peerURLs=http://10.104.0.2:2380 clientURLs=http://10.104.0.2:2379 isLeader=false - 8bacb519ebdee8db: name=node3 peerURLs=http://10.104.0.3:2380 clientURLs=http://10.104.0.3:2379 isLeader=false - c5f52ea2ade25e1b: name=node1 peerURLs=http://10.104.0.1:2380 clientURLs=http://10.104.0.1:2379 isLeader=true - ``` +--8<-- "check-etcd.md" ## Configure Patroni @@ -301,8 +328,8 @@ Run the following commands on all nodes. You can do this in parallel: $ sudo chmod 700 /data/pgsql ``` -3. 
Create the `/etc/patroni/patroni.yml` configuration file. Add the following configuration: - +3. Use the following command to create the `/etc/patroni/patroni.yml` configuration file and add the following configuration for `node1`: + ```bash echo " namespace: ${NAMESPACE} @@ -394,39 +421,39 @@ Run the following commands on all nodes. You can do this in parallel: " | sudo tee -a /etc/patroni/patroni.yml ``` -4. Check that the systemd unit file `patroni.service` is created in `/etc/systemd/system`. If it is created, skip this step. +4. Check that the systemd unit file `percona-patroni.service` is created in `/etc/systemd/system`. If it is created, skip this step. - If it's **not** created, create it manually and specify the following contents within: - - ```ini title="/etc/systemd/system/patroni.service" - [Unit] - Description=Runners to orchestrate a high-availability PostgreSQL - After=syslog.target network.target + If it's **not created**, create it manually and specify the following contents within: - [Service] - Type=simple + ```ini title="/etc/systemd/system/percona-patroni.service" + [Unit] + Description=Runners to orchestrate a high-availability PostgreSQL + After=syslog.target network.target - User=postgres - Group=postgres + [Service] + Type=simple - # Start the patroni process - ExecStart=/bin/patroni /etc/patroni/patroni.yml + User=postgres + Group=postgres - # Send HUP to reload from patroni.yml - ExecReload=/bin/kill -s HUP $MAINPID + # Start the patroni process + ExecStart=/bin/patroni /etc/patroni/patroni.yml - # only kill the patroni process, not its children, so it will gracefully stop postgres - KillMode=process + # Send HUP to reload from patroni.yml + ExecReload=/bin/kill -s HUP $MAINPID - # Give a reasonable amount of time for the server to start up/shut down - TimeoutSec=30 + # only kill the patroni process, not its children, so it will gracefully stop postgres + KillMode=process - # Do not restart the service if it crashes, we want to manually inspect database on failure - Restart=no + # Give a reasonable amount of time for the server to start up/shut down + TimeoutSec=30 - [Install] - WantedBy=multi-user.target - ``` + # Do not restart the service if it crashes, we want to manually inspect database on failure + Restart=no + + [Install] + WantedBy=multi-user.target + ``` 5. Make `systemd` aware of the new service: @@ -434,7 +461,8 @@ Run the following commands on all nodes. You can do this in parallel: $ sudo systemctl daemon-reload ``` -6. Now it's time to start Patroni. You need the following commands on all nodes but not in parallel. Start with the `node1` first, wait for the service to come to live, and then proceed with the other nodes one-by-one, always waiting for them to sync with the primary node: +6. Repeat steps 1-5 on the remaining nodes. In the end you must have the configuration file and the systemd unit file created on every node. +7. Now it's time to start Patroni. You need the following commands on all nodes but not in parallel. Start with the `node1` first, wait for the service to come to live, and then proceed with the other nodes one-by-one, always waiting for them to sync with the primary node: ```{.bash data-prompt="$"} $ sudo systemctl enable --now patroni @@ -443,7 +471,7 @@ Run the following commands on all nodes. You can do this in parallel: When Patroni starts, it initializes PostgreSQL (because the service is not currently running and the data directory is empty) following the directives in the bootstrap section of the configuration file. 
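+
+    You can also ask Patroni itself whether the node started correctly. As a quick sketch, assuming the REST API listens on its default port 8008 (check the `restapi` section of your `patroni.yml`), you can query the health endpoint on the node you just started. It returns HTTP `200 OK` while PostgreSQL is up and running:
+
+    ```{.bash data-prompt="$"}
+    $ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8008/health
+    ```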
-7. Check the service to see if there are errors:
+8. Check the service to see if there are errors:
 
    ```{.bash data-prompt="$"}
    $ sudo journalctl -fu patroni
    ```
@@ -464,32 +492,23 @@ Run the following commands on all nodes. You can do this in parallel:
    postgres=#
    ```
 
-8. When all nodes are up and running, you can check the cluster status using the following command:
+9. When all nodes are up and running, you can check the cluster status using the following command:
 
-    ```{.bash data-prompt="$"}
-    $ sudo patronictl -c /etc/patroni/patroni.yml list
-    ```
-
-    The output on `node1` resembles the following:
-
-    ```{.text .no-copy}
-    + Cluster: cluster_1 --+---------+---------+----+-----------+
-    | Member |    Host     |  Role   |  State  | TL | Lag in MB |
-    +--------+-------------+---------+---------+----+-----------+
-    | node-1 | 10.0.100.1  |  Leader | running |  1 |           |
-    +--------+-------------+---------+---------+----+-----------+
-    ```
-
-    On the remaining nodes:
-
-    ```{.text .no-copy}
-    + Cluster: cluster_1 --+---------+---------+----+-----------+
-    | Member |    Host     |  Role   |  State  | TL | Lag in MB |
-    +--------+-------------+---------+---------+----+-----------+
-    | node-1 | 10.0.100.1  |  Leader | running |  1 |           |
-    | node-2 | 10.0.100.2  | Replica | running |  1 |     0     |
-    +--------+-------------+---------+---------+----+-----------+
-    ```
+    ```{.bash data-prompt="$"}
+    $ sudo patronictl -c /etc/patroni/patroni.yml list
+    ```
+
+    The output resembles the following:
+
+    ```{.text .no-copy}
+    + Cluster: cluster_1 (7440127629342136675) -----+----+-------+
+    | Member |    Host    |  Role   |   State   | TL | Lag in MB |
+    +--------+------------+---------+-----------+----+-----------+
+    | node1  | 10.0.100.1 |  Leader |  running  |  1 |           |
+    | node2  | 10.0.100.2 | Replica | streaming |  1 |         0 |
+    | node3  | 10.0.100.3 | Replica | streaming |  1 |         0 |
+    +--------+------------+---------+-----------+----+-----------+
+    ```
 
 ## Configure HAProxy
 
diff --git a/docs/solutions/high-availability.md b/docs/solutions/high-availability.md
index 809cf130d..37c599a2a 100644
--- a/docs/solutions/high-availability.md
+++ b/docs/solutions/high-availability.md
@@ -38,7 +38,7 @@ There are several methods to achieve high availability in PostgreSQL. This solut
 
 ## Patroni
 
-[Patroni :octicons-link-external-16:](https://patroni.readthedocs.io/en/latest/) is a template for you to create your own customized, high-availability solution using Python and - for maximum accessibility - a distributed configuration store like ZooKeeper, etcd, Consul or Kubernetes.
+[Patroni :octicons-link-external-16:](https://patroni.readthedocs.io/en/latest/) is an open-source tool that helps you deploy, manage, and monitor highly available PostgreSQL clusters using physical streaming replication. Patroni relies on a distributed configuration store, such as ZooKeeper, etcd, Consul or Kubernetes, to store the cluster configuration.
 
 ### Key benefits of Patroni:
 
@@ -50,6 +50,21 @@ There are several methods to achieve high availability in PostgreSQL. This solut
 * Distributed consensus for every action and configuration.
 * Integration with Linux watchdog for avoiding split-brain syndrome.
 
+## etcd
+
+As stated before, Patroni uses a distributed configuration store to keep the cluster configuration, health and status. The most popular implementation of such a store is etcd, due to its simplicity, consistency and reliability. etcd not only stores the cluster data but also handles the election of a new primary node (a leader in etcd terminology).
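+
+To see what this looks like in practice, you can list the keys that Patroni keeps in etcd once a cluster from this solution is running. The commands below are only a sketch: they assume the default `/service` namespace, the `cluster_1` scope used in the deployment guides, and `etcdctl` talking to etcd over API version 3. You should see keys such as `/service/cluster_1/leader`, `/service/cluster_1/config` and one entry per member:
+
+```{.bash data-prompt="$"}
+$ ETCDCTL_API=3 etcdctl get /service/cluster_1/ --prefix --keys-only
+```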
+
+etcd is deployed as a cluster for fault tolerance. An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state.
+
+The recommended approach is to deploy an odd-sized cluster (for example, 3, 5 or 7 nodes). An odd number of nodes ensures that a majority of nodes remains available to make decisions even if one node fails. This majority is crucial for maintaining consistency and availability. For a cluster with n members, the majority is (n/2)+1.
+
+To better illustrate this concept, let's compare clusters of 3 and 4 nodes.
+
+In a 3-node cluster, the majority is 2 nodes. If one node fails, the remaining 2 nodes still form a majority and the cluster continues to operate.
+
+In a 4-node cluster, the majority is 3 nodes. If one node fails, the remaining 3 nodes still form a majority and the cluster keeps running, but if two nodes fail, only 2 nodes remain and the cluster stops functioning. A 4-node cluster therefore tolerates no more failures than a 3-node one, while adding an extra node to maintain.
+
+In this solution we use a 3-node etcd cluster that resides on the same hosts as PostgreSQL and Patroni.
 
 !!! admonition "See also"
 
diff --git a/snippets/check-etcd.md b/snippets/check-etcd.md
new file mode 100644
index 000000000..1bd516fd2
--- /dev/null
+++ b/snippets/check-etcd.md
@@ -0,0 +1,47 @@
+3. Check the etcd cluster members. Use `etcdctl` for this purpose. Ensure that `etcdctl` interacts with etcd using API version 3 and knows which nodes, or endpoints, to communicate with. For this, define the required information as environment variables. Run the following commands on one of the nodes:
+
+    ```
+    export ETCDCTL_API=3
+    HOST_1=10.104.0.1
+    HOST_2=10.104.0.2
+    HOST_3=10.104.0.3
+    ENDPOINTS=$HOST_1:2379,$HOST_2:2379,$HOST_3:2379
+    ```
+
+4. Now list the cluster members and output the result as a table:
+
+    ```{.bash data-prompt="$"}
+    $ sudo etcdctl --endpoints=$ENDPOINTS -w table member list
+    ```
+
+    ??? example "Sample output"
+
+        ```
+        +------------------+---------+-------+------------------------+----------------------------+------------+
+        |        ID        | STATUS  | NAME  |       PEER ADDRS       |        CLIENT ADDRS        | IS LEARNER |
+        +------------------+---------+-------+------------------------+----------------------------+------------+
+        | 4788684035f976d3 | started | node2 | http://10.104.0.2:2380 | http://192.168.56.102:2379 | false      |
+        | 67684e355c833ffa | started | node3 | http://10.104.0.3:2380 | http://192.168.56.103:2379 | false      |
+        | 9d2e318af9306c67 | started | node1 | http://10.104.0.1:2380 | http://192.168.56.101:2379 | false      |
+        +------------------+---------+-------+------------------------+----------------------------+------------+
+        ```
+
+5. To check which node is currently the leader, use the following command:
+
+    ```{.bash data-prompt="$"}
+    $ sudo etcdctl --endpoints=$ENDPOINTS -w table endpoint status
+    ```
+
+    ???
example "Sample output" + + ```{.text .no-copy} + +-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ + | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | + +-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ + | 10.104.0.1:2379 | 9d2e318af9306c67 | 3.5.16 | 20 kB | true | false | 2 | 10 | 10 | | + | 10.104.0.2:2379 | 4788684035f976d3 | 3.5.16 | 20 kB | false | false | 2 | 10 | 10 | | + | 10.104.0.3:2379 | 67684e355c833ffa | 3.5.16 | 20 kB | false | false | 2 | 10 | 10 | | + +-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ + ``` + + \ No newline at end of file diff --git a/snippets/percona-release-apt.md b/snippets/percona-release-apt.md new file mode 100644 index 000000000..c3a80d194 --- /dev/null +++ b/snippets/percona-release-apt.md @@ -0,0 +1,24 @@ +1. Install the `curl` download utility if it's not installed already: + + ```{.bash data-prompt="$"} + $ sudo apt update + $ sudo apt install curl + ``` + +2. Download the `percona-release` repository package: + + ```{.bash data-prompt="$"} + $ curl -O https://repo.percona.com/apt/percona-release_latest.generic_all.deb + ``` + +3. Install the downloaded repository package and its dependencies using `apt`: + + ```{.bash data-prompt="$"} + $ sudo apt install gnupg2 lsb-release ./percona-release_latest.generic_all.deb + ``` + +4. Refresh the local cache to update the package information: + + ```{.bash data-prompt="$"} + $ sudo apt update + ``` \ No newline at end of file diff --git a/snippets/percona-release-yum.md b/snippets/percona-release-yum.md new file mode 100644 index 000000000..05d669385 --- /dev/null +++ b/snippets/percona-release-yum.md @@ -0,0 +1,5 @@ +Run the following command as the `root` user or with `sudo` privileges: + +```{.bash data-prompt="$"} +$ sudo yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm +``` \ No newline at end of file