Updated HA doc setup with new version of ETCD (#571)
* Updated HA doc setup with new version of ETCD

* Replaced ETCD config with yaml file

* Added a How to set up all ETCD nodes simultaneously

* Updated Enable extensions page

modified:   docs/solutions/ha-setup-apt.md
modified:   docs/solutions/ha-setup-yum.md
nastena1606 authored Jun 6, 2024
1 parent 8a655b7 commit 5dac28c
Showing 5 changed files with 254 additions and 240 deletions.
8 changes: 1 addition & 7 deletions docs/enable-extensions.md
Expand Up @@ -10,13 +10,7 @@ While setting up a high availability PostgreSQL cluster with Patroni, you will n

- Patroni installed on every ``postgresql`` node.

- Distributed Configuration Store (DCS). Patroni supports DCSs such as ETCD, ZooKeeper, and Kubernetes, though [ETCD](https://etcd.io/) is the most popular one. It is available upstream as DEB packages for Debian 10, 11, 12 and Ubuntu 20.04, 22.04.

For CentOS 8, an RPM package for ETCD is available within Percona Distribution for PostgreSQL. You can install it using the following command:

```{.bash data-prompt="$"}
$ sudo yum install etcd python3-python-etcd
```
- Distributed Configuration Store (DCS). Patroni supports DCSs such as ETCD, ZooKeeper, and Kubernetes, though [ETCD](https://etcd.io/) is the most popular one. It is available within Percona Distribution for PostgreSQL for all supported operating systems.

- [HAProxy](http://www.haproxy.org/).

Expand Down
75 changes: 75 additions & 0 deletions docs/how-to.md
@@ -0,0 +1,75 @@
# How to

## How to configure ETCD nodes simultaneously

!!! note

    We assume that you are familiar with how ETCD works. Otherwise, refer to the configuration instructions where you add ETCD nodes one by one.

Instead of adding `etcd` nodes one by one, you can configure and start all nodes in parallel.
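Since the per-node configuration files differ only in the node name and IP address, you can also generate them from variables instead of editing each file by hand. A minimal sketch, where `NODE_NAME`, `NODE_IP`, and `CLUSTER` are placeholders you set to the values for the node you are configuring:

```bash
# Sketch: template the ETCD configuration from per-node values.
# NODE_NAME, NODE_IP, and CLUSTER are placeholders for your environment.
NODE_NAME=node1
NODE_IP=10.104.0.1
CLUSTER="node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380"

cat > etcd.conf.yaml <<EOF
name: '${NODE_NAME}'
initial-cluster-token: PostgreSQL_HA_Cluster_1
initial-cluster-state: new
initial-cluster: ${CLUSTER}
data-dir: /var/lib/etcd
initial-advertise-peer-urls: http://${NODE_IP}:2380
listen-peer-urls: http://${NODE_IP}:2380
advertise-client-urls: http://${NODE_IP}:2379
listen-client-urls: http://${NODE_IP}:2379
EOF

# Move the generated file into place on the node:
# sudo mv etcd.conf.yaml /etc/etcd/etcd.conf.yaml
```

Repeat on each node with that node's name and IP address.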

1. Create the ETCD configuration file on every node. You can edit the sample configuration file `/etc/etcd/etcd.conf.yaml` or create your own. Replace the node names and IP addresses with the actual names and IP addresses of your nodes.

=== "node1"

```yaml title="/etc/etcd/etcd.conf.yaml"
name: 'node1'
initial-cluster-token: PostgreSQL_HA_Cluster_1
initial-cluster-state: new
initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
data-dir: /var/lib/etcd
initial-advertise-peer-urls: http://10.104.0.1:2380
listen-peer-urls: http://10.104.0.1:2380
advertise-client-urls: http://10.104.0.1:2379
listen-client-urls: http://10.104.0.1:2379
```

=== "node2"

```yaml title="/etc/etcd/etcd.conf.yaml"
name: 'node2'
initial-cluster-token: PostgreSQL_HA_Cluster_1
initial-cluster-state: new
    initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
data-dir: /var/lib/etcd
initial-advertise-peer-urls: http://10.104.0.2:2380
listen-peer-urls: http://10.104.0.2:2380
advertise-client-urls: http://10.104.0.2:2379
listen-client-urls: http://10.104.0.2:2379
```

=== "node3"

```yaml title="/etc/etcd/etcd.conf.yaml"
    name: 'node3'
    initial-cluster-token: PostgreSQL_HA_Cluster_1
    initial-cluster-state: new
    initial-cluster: node1=http://10.104.0.1:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.3:2380
data-dir: /var/lib/etcd
initial-advertise-peer-urls: http://10.104.0.3:2380
listen-peer-urls: http://10.104.0.3:2380
advertise-client-urls: http://10.104.0.3:2379
listen-client-urls: http://10.104.0.3:2379
```

2. Enable and start the `etcd` service on all nodes:

```{.bash data-prompt="$"}
$ sudo systemctl enable --now etcd
```

    During startup, each ETCD node searches for the other cluster nodes defined in its configuration. If the other nodes are not yet running, the start may fail with a quorum timeout. This is expected behavior. Start all nodes again at the same time so that the ETCD cluster is created.

3. Check the etcd cluster members. Connect to one of the nodes and run the following command:

```{.bash data-prompt="$"}
$ sudo etcdctl member list
```

The output resembles the following:

```
2d346bd3ae7f07c4: name=node2 peerURLs=http://10.104.0.2:2380 clientURLs=http://10.104.0.2:2379 isLeader=false
8bacb519ebdee8db: name=node3 peerURLs=http://10.104.0.3:2380 clientURLs=http://10.104.0.3:2379 isLeader=false
c5f52ea2ade25e1b: name=node1 peerURLs=http://10.104.0.1:2380 clientURLs=http://10.104.0.1:2379 isLeader=true
```
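As a quick sanity check, you can verify from the member list that the cluster has all expected members and exactly one leader. A minimal sketch, assuming the output format shown above; in practice you would capture the live output with `members="$(sudo etcdctl member list)"` instead of the sample used here:

```bash
# Count members and leaders in the `etcdctl member list` output.
# The sample output from above stands in for the live command output.
members="$(cat <<'EOF'
2d346bd3ae7f07c4: name=node2 peerURLs=http://10.104.0.2:2380 clientURLs=http://10.104.0.2:2379 isLeader=false
8bacb519ebdee8db: name=node3 peerURLs=http://10.104.0.3:2380 clientURLs=http://10.104.0.3:2379 isLeader=false
c5f52ea2ade25e1b: name=node1 peerURLs=http://10.104.0.1:2380 clientURLs=http://10.104.0.1:2379 isLeader=true
EOF
)"
member_count=$(printf '%s\n' "$members" | grep -c '^')
leader_count=$(printf '%s\n' "$members" | grep -c 'isLeader=true')
echo "members=${member_count} leaders=${leader_count}"
# prints: members=3 leaders=1
```

A three-node cluster is healthy when all three members are listed and exactly one reports `isLeader=true`.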
