[DPE-5827] Set all nodes to synchronous replicas #672
base: main
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff            @@
##              main     #672    +/-   ##
=========================================
  Coverage    71.82%   71.83%
=========================================
  Files           13       13
  Lines         3219     3220     +1
  Branches       477      476     -1
=========================================
+ Hits          2312     2313     +1
  Misses         791      791
  Partials       116      116

☔ View full report in Codecov by Sentry.
@@ -846,7 +846,7 @@ def update_synchronous_node_count(self, units: int | None = None) -> None:
             with attempt:
                 r = requests.patch(
                     f"{self._patroni_url}/config",
-                    json={"synchronous_node_count": units // 2},
+                    json={"synchronous_node_count": units - 1},
REST calls should also use the new value.
Code LGTM, but why are we setting all nodes to be sync replicas? From the mentioned ticket it should come from a config value?
What was discussed was to switch the default behaviour to all-sync and add a config option for controlling the number of sync replicas. The config option still needs a spec. I'll comment on the ticket to clarify.
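To make the future config-driven behaviour concrete, here is a rough sketch of how such an option could cap the sync count while keeping all-sync as the default; the option name and semantics are hypothetical until the spec is agreed on the ticket:

```python
# Sketch only: hypothetical config handling, not the agreed spec.
def desired_sync_count(units: int, configured: int | None) -> int:
    """Default to all-sync (units - 1); clamp any configured value into range."""
    all_sync = max(units - 1, 0)
    if configured is None:
        return all_sync
    return max(0, min(configured, all_sync))
```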
Cool! Testing it, it's working fine in almost all scenarios.
The scenario where it's not working yet is the upgrade:
juju deploy postgresql --channel 14/stable -n 3
# Wait for the units to settle down, then:
curl UNIT-ZERO-IP:8008/cluster | jq # There is one leader, one sync-standby and one replica, as expected.
juju run postgresql/leader pre-upgrade-check
juju refresh --path ./*.charm postgresql
# Wait for the units to settle down, then:
curl UNIT-ZERO-IP:8008/cluster | jq # There is one leader, one sync-standby and one replica, but we should have one leader and two sync-standby only.
juju add-unit postgresql -n 1
# Wait for the units to settle down, then:
curl UNIT-ZERO-IP:8008/cluster | jq # There is one leader, and three sync-standby, as expected.
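For completeness, a small sketch of the same check done programmatically instead of with curl/jq, counting sync standbys via Patroni's /cluster endpoint (the unit IP is a placeholder, and the expected counts follow the scenario above):

```python
# Sketch only: programmatic equivalent of `curl UNIT-ZERO-IP:8008/cluster | jq`.
import requests

def sync_standby_count(unit_ip: str) -> int:
    """Count members reported by Patroni with the sync_standby role."""
    members = requests.get(f"http://{unit_ip}:8008/cluster", timeout=10).json()["members"]
    return sum(1 for m in members if m["role"] == "sync_standby")

# With 3 units after the refresh we would expect 2 sync standbys (all-sync),
# but the observed state above still shows only 1 until a unit is added.
```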
LGTM, but I need some time to play with it without rushing.
Let's merge it after releasing the current PostgreSQL VM charm to stable.
5082317 should fix the upgrade. I'm not adding an assertion to the integration tests because the behaviour will drift once the changes land on edge and stable.
> The scenario where it's not working yet is the upgrade:

> 5082317 should fix the upgrade. I'm not adding an assertion to the integration tests because the behaviour will drift once the changes land on edge and stable.
Thanks, Dragomir! LGTM!
The branch was updated from 32839f8 to 98fcae3.
@dragomirp please do not forget to add the config option.
Set all replicas to sync