
Add support for suspending KubeVirt VMs instead of deletion during coder stop #11

Open · hh opened this issue Dec 7, 2022 · 2 comments

@hh (Contributor) commented Dec 7, 2022

This will likely require research into setting `count = data.coder_workspace.me.start_count` appropriately on the resources that should be spun down, versus those that persist through shutdown.
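As a minimal sketch of that pattern (assuming a Coder Terraform template; the resource names here are hypothetical):

```hcl
# Hypothetical sketch: gate only the ephemeral resources on the
# workspace's start count. Persistent resources carry no count and
# survive `coder stop`.
data "coder_workspace" "me" {}

resource "kubernetes_manifest" "vm" {
  # start_count is 1 while the workspace is running and 0 when it is
  # stopped, so this resource is destroyed on `coder stop`.
  count = data.coder_workspace.me.start_count

  manifest = {
    apiVersion = "kubevirt.io/v1"
    kind       = "VirtualMachine"
    # ...
  }
}
```

For this issue the goal is the opposite of destroying the VM, so the `start_count` would more likely drive an attribute that suspends the VM rather than the resource's existence.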

Since most of our Kubernetes resources are created dynamically via Cluster API, we will need to leave them 'up' and run only what is necessary to suspend the created VMs.

I would suggest first figuring out how to suspend and restore the created KubeVirt VMs without updating the template, then working back and adding support to the appropriate Terraform resources to do the same.
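For that first step, a sketch of suspending and restoring a KubeVirt VM out-of-band, assuming `virtctl` is installed and a VM named `my-workspace-vm` (a hypothetical name):

```shell
# Stop the VM but keep the VirtualMachine object (and its disks) around.
virtctl stop my-workspace-vm

# Equivalent without virtctl: flip spec.running on the VirtualMachine.
kubectl patch virtualmachine my-workspace-vm --type merge \
  -p '{"spec":{"running":false}}'

# Restore the VM later.
virtctl start my-workspace-vm
```

Since stopping only tears down the VirtualMachineInstance while the VirtualMachine definition and PVC-backed disks persist, this maps naturally onto the persist-through-shutdown behavior described above.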

@BobyMCbobs (Contributor) commented

I thought about setting the `TalosControlPlane.spec.replicas` field to 0, but it appears not to be allowed:

   - lastTransitionTime: "2022-12-07T21:29:04Z"
     message: Cannot scale down control plane nodes to 0%!!(MISSING)(EXTRA *int32=0xc0017106c0,
       int=1)

@hh (Contributor, Author) commented Dec 7, 2022 via email
