
statefulset: Use stateful sets for postgres deployments (PROJQUAY-6672) #927

Open · wants to merge 1 commit into base: master
Conversation

jonathankingfc (Collaborator):

  • Swap Postgres and Clair Postgres Deployments to StatefulSets

```diff
@@ -1,5 +1,5 @@
 apiVersion: apps/v1
-kind: Deployment
+kind: StatefulSet
```
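For context, a minimal sketch of what the converted manifest might look like. Note that a StatefulSet requires a `serviceName` pointing at a governing Service, which a Deployment does not; all names, labels, and the image below are assumptions for illustration, not taken from this PR.

```yaml
# Hypothetical sketch: the Postgres Deployment converted to a StatefulSet.
# Names, labels, and image are assumed for illustration.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: quay-database
spec:
  serviceName: quay-database   # required for StatefulSets: the governing Service
  replicas: 1
  selector:
    matchLabels:
      app: quay-database
  template:
    metadata:
      labels:
        app: quay-database
    spec:
      containers:
        - name: postgres
          image: registry.example.com/postgresql:13   # assumed image
          volumeMounts:
            - name: data
              mountPath: /var/lib/pgsql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: quay-database   # existing PVC, assumed name
```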
Collaborator:

how will this work with existing deployments? On upgrade will it replace the deployment with a statefulset?

Collaborator (author):

Yes, it will replace the Deployment with a StatefulSet, so that we do not end up with overlapping deployments that would break the upgrade job.

```diff
 replicas: 1
-strategy:
-  type: Recreate
```
Collaborator:

This will switch the deployment to a rolling upgrade, which could cause the old and new pod to run at the same time. Will that cause issues?

Collaborator (author):

Since it is a StatefulSet, it will only run one pod at a time.
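This behavior comes from StatefulSet defaults rather than a `strategy` field (which StatefulSets do not have): with `OrderedReady` pod management and a `RollingUpdate` update strategy, the controller terminates the old pod and waits for it to go away before creating its replacement. A sketch of the relevant (default) stanza, with assumed resource names:

```yaml
# Hypothetical excerpt showing the StatefulSet defaults that give
# Recreate-like, one-pod-at-a-time behavior with a single replica.
spec:
  replicas: 1
  podManagementPolicy: OrderedReady   # default: pods are replaced one at a time
  updateStrategy:
    type: RollingUpdate               # default: old pod terminates before new pod starts
```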

Reviewer:

In theory, issues can occur where Quay is operating on a Clair that has a split-brain during a deployment (i.e. the security worker requests the indexing state from a new pod, sees it has changed then makes an indexing request but it goes to an old pod). In practice, this is how it works in production and we've never seen problems so I think it's probably fine but something to document.

```diff
@@ -1,5 +1,5 @@
 apiVersion: apps/v1
-kind: Deployment
+kind: StatefulSet
```
Collaborator:

The original issue seems to be caused by the old and the new db pods being matched by the same service, so requests against the db go to either one at random. Does switching to stateful sets fix this issue?

Collaborator (author):

Yes, since there will be only one underlying pod attached to the service.
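The governing Service for a StatefulSet is typically headless (`clusterIP: None`), so DNS resolves directly to the single stable pod (e.g. `quay-database-0`) instead of load-balancing across whatever pods currently match the selector. A sketch of such a Service, with assumed names:

```yaml
# Hypothetical governing Service for a single-replica Postgres StatefulSet.
apiVersion: v1
kind: Service
metadata:
  name: quay-database
spec:
  clusterIP: None          # headless: DNS resolves directly to the pod
  selector:
    app: quay-database
  ports:
    - name: postgres
      port: 5432
```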

jonathankingfc (Collaborator, author):

/retest

1 similar comment
bcaton85 (Collaborator) commented Sep 9, 2024:

/retest

- Swap Postgres and Clair Postgres Deployments to StatefulSets

openshift-ci bot commented Oct 15, 2024

@jonathankingfc: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name               Commit   Required  Rerun command
ci/prow/ocp-latest-e2e  0ba46ee  true      /test ocp-latest-e2e

