
Changing MINIKUBE_HOME after VM creation does not work without cluster recreation #14466

Open
pre opened this issue Jun 29, 2022 · 6 comments
Assignees
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/improvement Categorizes issue or PR as related to improving upon a current feature. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@pre

pre commented Jun 29, 2022

What Happened?

Due to "enterprise reasons" we had to move ~/.minikube to ~/something_else/.minikube.

An ideal migration without cluster recreation would be:

  • minikube stop
  • mv ~/.minikube ~/something_else/.minikube
  • export MINIKUBE_HOME="${HOME}/something_else/.minikube"
  • minikube start

But in practice, changing MINIKUBE_HOME="${HOME}/something_else/.minikube" has no effect once the cluster VM has already been created (log below).

This can be temporarily worked around with a symlink: ln -s ~/something_else/.minikube ~/.minikube.

The catch is that we have an extensive locally scripted setup shared between a dozen developers. "It would be nice" if MINIKUBE_HOME were not set in stone, so that the cluster would not have to be recreated when MINIKUBE_HOME is changed and the files are moved outside of minikube.

This would minimize the pain of "how to migrate to a newer local setup" in a scenario where a locally scripted minikube setup is essential to a group of people.

❯ minikube version
minikube version: v1.25.2
commit: 362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7

Attach the log file

* Restarting existing docker container for "design-time" ...
! StartHost failed, but will try again: provision: Error getting config for native Go SSH: open /Users/pre/.minikube/machines/design-time/id_rsa: no such file or directory
* Updating the running docker "design-time" container ...
* Failed to start docker container. Running "minikube delete -p design-time" may fix it: provision: Error getting config for native Go SSH: open /Users/pre/.minikube/machines/design-time/id_rsa: no such file or directory

X Exiting due to GUEST_SSH_CERT_NOT_FOUND: Failed to start host: provision: Error getting config for native Go SSH: open /Users/pre/.minikube/machines/design-time/id_rsa: no such file or directory
* Suggestion: minikube is missing files relating to your guest environment. This can be fixed by running 'minikube delete'
* Related issue: https://github.com/kubernetes/minikube/issues/9130

Operating System

macOS (Default)

Driver

Docker

@pre pre changed the title Changing MINIKUBE_HOME after VM creation does not work Changing MINIKUBE_HOME after VM creation does not work without cluster recreation Jun 29, 2022
@spowelljr
Member

Hi @pre, thanks for reporting your suggestion to minikube.

Offhand I'm not sure how much work this would require, but I agree this would be an ideal use case and I would support any PR that implements it.

@spowelljr spowelljr added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. priority/backlog Higher priority than priority/awaiting-more-evidence. kind/improvement Categorizes issue or PR as related to improving upon a current feature. labels Jun 29, 2022
@presztak
Member

presztak commented Aug 7, 2022

Hi @spowelljr, I dug into this issue a little. The problem is that in the ~/.minikube/machines/minikube/config.json file, StorePath and the other cert paths are set to whatever the MiniPath function returned at the moment minikube start was first invoked.

Potentially we could fix this in the fixHost function:

func fixHost(api libmachine.API, cc *config.ClusterConfig, n *config.Node) (*host.Host, error) {

This should be relatively easy for AuthOptions, but for Driver it's a bit tricky, because api.Load

h, err := api.Load(config.MachineName(*cc, *n))

returns a *host.Host, and Driver is an interface with no way to modify its StorePath field.
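
For illustration, a minimal sketch of what the AuthOptions part could look like (hypothetical code, not actual minikube source; field names are taken from libmachine's auth.Options, and updateAuthPaths is an assumed helper):

import (
	"strings"

	"github.com/docker/machine/libmachine/host"
)

// updateAuthPaths rebases the stale absolute paths stored in the host's
// AuthOptions from the old MINIKUBE_HOME onto the current one.
func updateAuthPaths(h *host.Host, oldBase, newBase string) {
	auth := h.HostOptions.AuthOptions
	for _, p := range []*string{
		&auth.StorePath,
		&auth.CertDir,
		&auth.CaCertPath,
		&auth.CaPrivateKeyPath,
		&auth.ClientCertPath,
		&auth.ClientKeyPath,
		&auth.ServerCertPath,
		&auth.ServerKeyPath,
	} {
		*p = strings.Replace(*p, oldBase, newBase, 1)
	}
}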

Another approach would be to modify the JSON file directly, but I don't think that's a good idea.

What do you think about this?

@spowelljr
Copy link
Member

I thought of a few possible solutions:

Solution 1: Instead of storing the whole path for everything, we could store only the non-dynamic part and then append it to MiniPath.

For example, for SSHKeyPath we would store machines/minikube/id_rsa, and then when we need the value: MiniPath + SSHKeyPath (see the sketch after this list).

Problem 1: This would break previously generated configs (i.e. ones that contain the full path)
Problem 2: This would likely require many code changes (maybe not true) to add MiniPath + in all the places required
Problem 1 could be resolved by adding a flag to start such as --dynamic-config
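
As a sketch of the read side of Solution 1 (hypothetical; localpath.MiniPath is minikube's existing helper, resolveStorePath is an assumed new one):

import (
	"path/filepath"

	"k8s.io/minikube/pkg/minikube/localpath"
)

// resolveStorePath joins a stored relative path with the current
// MINIKUBE_HOME at read time, so moving the directory keeps working.
// Absolute paths from old configs are passed through unchanged.
func resolveStorePath(stored string) string {
	if filepath.IsAbs(stored) {
		return stored // legacy config with the full path baked in
	}
	return filepath.Join(localpath.MiniPath(), stored)
}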

Solution 2: We could add a config update command that goes through the config and updates paths to the new location.

Example: Copy .minikube from /Users/foo to /Users/bar
Run minikube config update
This would result in "SSHKeyPath": "/Users/foo/.minikube/machines/minikube/id_rsa" being updated to
"SSHKeyPath": "/Users/bar/.minikube/machines/minikube/id_rsa"

I'm open to other ideas as well

@presztak
Copy link
Member

/assign

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 8, 2023
@vaibhav2107
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 12, 2023