This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

What are the viable upgrade paths for a kube-aws 0.9.9 k8s 1.8 cluster to k8s 1.9 #1120

Closed
whereisaaron opened this issue Jan 29, 2018 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@whereisaaron
Contributor

For existing 0.9.9 k8s 1.8.x clusters, what are the viable upgrade paths to 1.9.x?

First backup everything in sight, and then:

  1. Upgrade kube-aws to 0.9.10, review/render/validate cluster.yaml, run kube-aws update
  2. Stick with kube-aws 0.9.9 but bump the hyperkube version to 1.9 and update the various container image versions, render/validate, and then run kube-aws update
  3. Create a new 1.9.x cluster with kube-aws 0.9.10, migrate the workload, destroy 1.8.x cluster
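Option 1 above can be sketched as a short script. This is a hedged sketch, not the exact invocation from the thread: it assumes the kube-aws 0.9.x subcommands `render stack`, `validate`, and `update` (check `kube-aws --help` for your release), and parameterises the binary via `KUBE_AWS` so the sketch can be dry-run with `KUBE_AWS=echo`.

```shell
#!/bin/sh
# Hedged sketch of upgrade option 1: re-render, validate, then update in place.
# KUBE_AWS lets you dry-run the sequence, e.g. KUBE_AWS=echo upgrade_in_place.
KUBE_AWS="${KUBE_AWS:-kube-aws}"

upgrade_in_place() {
    # Re-render the CloudFormation stack templates from the edited cluster.yaml,
    # sanity-check them, then apply the change to the running cluster.
    "$KUBE_AWS" render stack &&
    "$KUBE_AWS" validate &&
    "$KUBE_AWS" update
}
```

Run this from the directory holding cluster.yaml, after upgrading the kube-aws binary to 0.9.10, reviewing the regenerated config, and backing everything up.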
@cknowles
Contributor

I’ve upgraded 1.8 to 1.9 using the kube-aws update command, but I had to delete all the secrets generated from service accounts and then cycle all the kube-system pods; after that it’s working. Side note: at least for me there’s an existing issue on the latest CoreOS where the Docker version won’t always terminate containers properly, but that has been present since roughly the 1.8 timeframe.
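The workaround described above (delete the service-account token secrets, then restart the kube-system pods) can be sketched roughly as follows. This is a hedged sketch, not the commenter's exact commands: it uses standard kubectl calls, and `KUBECTL` is parameterised so the sketch can be dry-run with `KUBECTL=echo`.

```shell
#!/bin/sh
# Hedged sketch: after a `kube-aws update` bumps the control plane, delete the
# service-account token secrets (the token controller regenerates them), then
# restart the kube-system pods so they pick up fresh tokens.
KUBECTL="${KUBECTL:-kubectl}"

cycle_service_account_secrets() {
    # List every service-account token secret as "namespace name" pairs and
    # delete each one; the controller-manager recreates them.
    "$KUBECTL" get secrets --all-namespaces \
        --field-selector type=kubernetes.io/service-account-token \
        -o jsonpath='{range .items[*]}{.metadata.namespace} {.metadata.name}{"\n"}{end}' |
    while read -r ns name; do
        "$KUBECTL" -n "$ns" delete secret "$name"
    done
}

restart_kube_system_pods() {
    # Deleting the pods forces their controllers to recreate them.
    "$KUBECTL" -n kube-system delete pod --all
}
```

Deleting every service-account token cluster-wide is disruptive; on a busy cluster you may prefer to cycle only the kube-system namespace first.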

@whereisaaron
Contributor Author

Thanks @c-knowles. @camilb reports a similar experience: sometimes having to recreate secrets and restart kube-system pods (I wonder why, if the certificates have not changed?), but otherwise upgrades can work.

So it’s probably safer to stick with the same kube-aws version (in case of breaking changes to the CloudFormation stack template), bump the component versions, and hope for the best 😄

@whereisaaron
Contributor Author

whereisaaron commented Apr 11, 2018

k8s 1.10.0 has been released to stable, but the last stable kube-aws release still targets 1.8.x. I guess all the keen people operate off the master branch, bleeding edge, so as to get current versions of k8s. But will new users pick up kube-aws if releases lag significantly behind k8s versions?

@c-knowles regarding that container termination issue, CoreOS 1688.x.y ships Docker 17.12.1, which is supposed to fix the bug with container exits not being caught.

@cknowles
Contributor

@whereisaaron cool, thanks! I'll give it another go shortly. Ref for that issue - #1135.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 23, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 23, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
