
Feature: Add support for running multiple replicas #27

Closed · guerzon opened this issue Jun 9, 2023 · 10 comments · Fixed by #131

Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@guerzon
Owner

guerzon commented Jun 9, 2023

Requirements:

Proposal:

@guerzon guerzon added the enhancement New feature or request label Jun 9, 2023
@guerzon guerzon self-assigned this Jun 9, 2023
@fhera

fhera commented Jan 18, 2024

Hi, is it possible to run Vaultwarden with multiple replicas for an HA deployment?

Thank you @guerzon for this chart.

@guerzon
Owner Author

guerzon commented Jan 25, 2024

¡Hola @fhera!

Right now there are issues with running multiple copies of Vaultwarden. For example, the data directory (/data by default) contains application data such as attachments, and it has to be visible to all replicas.

It is possible to put it on an NFS filesystem, but I'm not sure if that's something you can do or even want. There might be other cloud-native alternatives for this. Personally, I would like to see S3 support so we could point /data to an S3 bucket instead. If you or your organization has the resources, I encourage you to sponsor or submit a pull request at https://github.com/dani-garcia/vaultwarden.
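
For illustration, a minimal sketch of what the NFS option could look like: a ReadWriteMany PersistentVolume that all replicas could mount as /data. The server address, export path, and size below are placeholders, not a tested setup:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: vaultwarden-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany               # RWX so several pods can mount the same volume
  nfs:
    server: nfs.example.internal  # placeholder NFS server
    path: /exports/vaultwarden    # placeholder export path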

One hack I found was to disable the ORG_ATTACHMENT_LIMIT but I'm pretty sure this is not enough.

I opened a discussion here to discuss the topic further.

Lester

@akelge

akelge commented Feb 6, 2024

I don't think the chart should take care of concurrent access to /data/; that is usually handled by Kubernetes itself via ReadWriteMany PVCs. It should be enough to let users set the PVC access mode, and a good Kubernetes admin will know whether they need an RWO or RWX volume.

All in all, given how Bitwarden works, there are not many concurrent accesses: the clients sync the vault when needed and otherwise do not touch the server at all, unless users use the web UI instead of a client.

One real need for multiple replicas could be HA: having two pods helps during rolling updates or when a node fails. But Kubernetes already takes care of that by rescheduling pods onto other nodes and performing rolling upgrades, so there will always be a running pod.
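
As a rough sketch of what exposing this through the chart could look like (the key names below are assumptions made for illustration, not the chart's actual values schema):

# Hypothetical values override; key names are illustrative only.
replicaCount: 2
storage:
  accessMode: ReadWriteMany   # RWX volume shared by all replicas
  size: 10Gi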

@guerzon guerzon added the help wanted Extra attention is needed label Feb 21, 2024
@eirisdg

eirisdg commented Mar 26, 2024

> I don't think the chart should take care of concurrent access to /data/; that is usually handled by Kubernetes itself via ReadWriteMany PVCs. It should be enough to let users set the PVC access mode, and a good Kubernetes admin will know whether they need an RWO or RWX volume.
>
> All in all, given how Bitwarden works, there are not many concurrent accesses: the clients sync the vault when needed and otherwise do not touch the server at all, unless users use the web UI instead of a client.
>
> One real need for multiple replicas could be HA: having two pods helps during rolling updates or when a node fails. But Kubernetes already takes care of that by rescheduling pods onto other nodes and performing rolling upgrades, so there will always be a running pod.

I think @akelge is right. The chart should simply support setting the number of replicas and not try to manage what Kubernetes itself is responsible for.

@davidfrickert

davidfrickert commented May 29, 2024

I've tested this a bit and it seems to work with this configuration:

- kind: StatefulSet (each replica has its own persistent storage, which is not synced)
- replicas: 3
- database: postgres
- extras: the Service is configured with Traefik sticky-session annotations (a fuller Service sketch follows below):
  traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
  traefik.ingress.kubernetes.io/service.sticky.cookie.name: "vaultwarden-sticky"

Without the sticky sessions, strange things happen: I get logged out immediately after logging in. I assume there is some in-memory state that is not yet shared with the other replicas.

I don't use attachments, and the icon cache does not warrant shared storage since icons can be re-downloaded when needed, so if this stays stable it seems decent enough.
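
For reference, a minimal sketch of a Service carrying those annotations, assuming Traefik is the ingress controller; the name, selector, and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: vaultwarden
  annotations:
    traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
    traefik.ingress.kubernetes.io/service.sticky.cookie.name: "vaultwarden-sticky"
spec:
  selector:
    app.kubernetes.io/name: vaultwarden   # placeholder selector
  ports:
    - port: 80
      targetPort: 8080                    # placeholder; match your container port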

davidfrickert added a commit to davidfrickert/vaultwarden that referenced this issue May 29, 2024
@derritter88

I am using a LoadBalancer service and Apache2 as a proxy for Vaultwarden.
When running more than one replica of Vaultwarden, I am usually required to log in multiple times.

@davidfrickert

> I am using a LoadBalancer service and Apache2 as a proxy for Vaultwarden. When running more than one replica of Vaultwarden, I am usually required to log in multiple times.

You need to set up some sort of sticky sessions, otherwise it doesn't work properly.

@derritter88

> > I am using a LoadBalancer service and Apache2 as a proxy for Vaultwarden. When running more than one replica of Vaultwarden, I am usually required to log in multiple times.
>
> You need to set up some sort of sticky sessions, otherwise it doesn't work properly.

I added

  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800

to my LoadBalancer Service manifest, and this seems to do the job properly.
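
In context, a sketch of a LoadBalancer Service with that affinity block in place; the name, selector, and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: vaultwarden
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: vaultwarden   # placeholder selector
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
    - port: 80
      targetPort: 8080                    # placeholder; match your container port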

@guerzon
Owner Author

guerzon commented Oct 30, 2024

A PR would be very welcome for this feature.

@guerzon
Owner Author

guerzon commented Nov 18, 2024

Thanks all for the inputs. PR #131 created and merged.

6 participants