
"minikube image build" doesn't respect registry-creds #16033

Closed
holograph opened this issue Mar 12, 2023 · 13 comments
Labels
area/image Issues/PRs related to the minikube image subcommand
kind/feature Categorizes issue or PR as related to a new feature.
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@holograph

What Happened?

The builds kicked off by minikube image build don't respect the registry credentials provided via the registry-creds add-on that the documentation recommends. This is a problem if, for example, the build is based on a base image hosted on a private registry.

In fact, unless I'm missing something, there doesn't seem to be any way to provide credentials short of injecting them via docker login (either through docker-env or SSH), nor any way to parameterize the build container (e.g. with imagePullSecrets). I'd be happy to lend a hand in testing or even implementing this, but this codebase is brand new to me and in an unfamiliar language, so I'm not promising anything...

(All of this applies to the Docker driver on both Windows and macOS.)
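
To illustrate, a minimal reproduction plus the workaround I'm currently using (registry.example.com and the image names are placeholders):

# Dockerfile referencing a base image on a private registry
FROM registry.example.com/team/base:latest

# fails with an authentication error, even with the registry-creds add-on configured
minikube image build -t myapp .

# workaround: point the local docker CLI at minikube's daemon and log in there
eval $(minikube docker-env)
docker login registry.example.com
docker build -t myapp .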

Attach the log file

No relevant logs

Operating System

macOS (Default)

Driver

Docker

@afbjorklund
Collaborator

afbjorklund commented Mar 12, 2023

Listing expectations for the minikube image "framework" is still a good contribution, even without code.

Normally the images aren't pushed anywhere (just saved/loaded), so no credentials are needed...

Alternatively, the localhost:5000 hack is used with the insecure in-cluster registry; again, no credentials.

I think that as long as the credential files are on the VM (or KIC node), they should be picked up by the container runtime?
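
For example, a sketch of that registry flow (assuming the registry add-on; the port-forward is one way to reach the in-cluster registry from the host):

# enable the insecure in-cluster registry
minikube addons enable registry

# expose it as localhost:5000 (in a separate terminal)
kubectl port-forward --namespace kube-system service/registry 5000:80

# push without any credentials
docker tag myapp localhost:5000/myapp
docker push localhost:5000/myapp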

@afbjorklund afbjorklund added the area/image Issues/PRs related to the minikube image subcommand label Mar 12, 2023
@holograph
Author

This isn't actually about push: simply kicking off a build with FROM <some_private_registry>/base:latest won't work unless the appropriate credentials are already in place.

I did originally intend to just load the image, but ran into a separate issue (#16032, as it happens). Since I ended up resorting to minikube docker-env followed by docker login anyway, it made me realize how silly it is to build locally and then tar-transfer-untar between two local Docker contexts. I therefore see minikube image build as a sort of shorthand for that process, retaining container runtime semantics as appropriate and sparing people from having to learn to switch between Docker contexts (which, in my experience, some find very confusing).
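
Roughly, the two flows being compared (the image name is a placeholder):

# build on the host, then tar-transfer-untar into the cluster runtime
docker build -t myapp .
minikube image load myapp

# or: build directly against minikube's daemon
eval $(minikube docker-env)
docker build -t myapp .

# minikube image build is, conceptually, shorthand for the above
minikube image build -t myapp .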

@holograph
Author

Having a local registry (the localhost:5000 hack, as you say) is an extra complication that is arguably even harder to explain to relative newcomers than minikube docker-env. As for your final comment, I'm not sure what you mean by "if the files are on the VM", but then I am fairly new to this stack 🤷

@afbjorklund
Collaborator

afbjorklund commented Mar 12, 2023

I'm not sure what you mean by "if files are on the VM"

I meant the config files with the "docker" credentials:

~/.docker/config.json ($DOCKER_CONFIG)

{
	"auths": {
		"https://index.docker.io/v1/": {
			"auth": "base64(username:password)"
		}
	}
}

A workaround is to set the file up over SSH, as you say.

And then docker/podman/buildctl will read it.
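
As a sketch (credentials and paths are placeholders; this assumes the Docker driver's default docker user):

# the auth field is just base64(username:password)
echo -n 'myuser:mypassword' | base64

# copy an existing config onto the node, where the runtime's tooling can read it
minikube ssh -- mkdir -p /home/docker/.docker
minikube cp ~/.docker/config.json /home/docker/.docker/config.json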

@afbjorklund
Collaborator

afbjorklund commented Mar 12, 2023

It would be nice if this could be done in a runtime-agnostic way; currently I think most local tools (docker, podman, nerdctl) just fall back to writing the ~/.docker files directly, with no login.

I guess we do it the usual way and just drop the D-word while keeping the legacy interface. The current docker flags (opt, env) were supposed to get generic "runtime" aliases:

minikube image login [SERVER]

minikube image logout [SERVER]


Runtime flags, minikube start:

    --docker-env=[]:
	Environment variables to pass to the Docker daemon. (format: key=value)

    --docker-opt=[]:
	Specify arbitrary flags to pass to the Docker daemon. (format: key=value)

    --runtime-env=[]:
	Environment variables to pass to the runtime. (format: key=value)

    --runtime-opt=[]:
	Specify arbitrary flags to pass to the runtime. (format: key=value)

Buildtime flags, minikube image build:

    --build-env=[]:
	Environment variables to pass to the build. (format: key=value)

    --build-opt=[]:
	Specify arbitrary flags to pass to the build. (format: key=value)
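
Hypothetical usage, assuming the aliases behave like their docker-flag counterparts (values are placeholders):

    minikube start --runtime-env=HTTP_PROXY=http://proxy.example.com:3128

    minikube image build --build-env=HTTP_PROXY=http://proxy.example.com:3128 -t myapp .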

@afbjorklund
Collaborator

afbjorklund commented Mar 12, 2023

@holograph : I think that if you use minikube image pull --remote, it will use the local credentials?

That is, crane will pull the image from the registry using the default credentials and save it in the cache.
Then it will be loaded from the cache (as a tarball), transferred to the node, and loaded into the runtime.

remote.Image(ref, remote.WithAuthFromKeychain(authn.DefaultKeychain), remote.WithPlatform(p))
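
From the command line that would be something like (assuming --remote behaves as described above; the image name is a placeholder):

# pulls with the host's credentials, caches a tarball, then loads it into the cluster runtime
minikube image pull --remote registry.example.com/team/base:latest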


@afbjorklund
Collaborator

Since I ended up resorting to minikube docker-env followed by docker login anyway, it made me realize how silly it is to build locally and then tar-transfer-untar between two local Docker contexts. I therefore see minikube image build as a sort of shorthand for that process, retaining container runtime semantics as appropriate and sparing people from having to learn to switch between Docker contexts (which, in my experience, some find very confusing).

This was the idea behind minikube image, indeed. You shouldn't need Docker in order to use Kubernetes...

But people are very set in their ways, so I think we will be stuck with the legacy minikube docker-env too. 😔

@afbjorklund afbjorklund added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 12, 2023
@holograph
Author

@afbjorklund Unfortunately image pull --remote won't help, since it's a base image in a FROM clause that requires the credentials. In other words, I'd have to manually resolve the base image stack and pull each of those images myself -- not really practical.

As for docker-env, I think it's a wonderful tool that I'm very happy to have when debugging, but as you say, it should not be required for (relatively) basic usage patterns. The minikube image login approach seems reasonable, although perhaps integrating with registry-creds (since it's built into minikube anyway) would be cleaner... In any case I'd be happy to help test this - might even do more, but I don't make promises I'm not convinced I can keep :-)

@afbjorklund
Collaborator

afbjorklund commented Mar 14, 2023

The main problem with minikube docker-env is keeping it working now that we are no longer using Docker anywhere.

And since the legacy TCP API is no longer recommended, or even supported, there's all the fun of SSH connections...
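
For reference, a sketch of the SSH route (the --ssh-host flag is an assumption here and may depend on the minikube version):

# point the docker CLI at the daemon over SSH instead of the legacy TCP socket
eval $(minikube docker-env --ssh-host)
docker ps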

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 12, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 12, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Jan 19, 2024