
Podman desktop should automatically load built images in the kind/microshift cluster #2866

Closed
vietk opened this issue Jun 15, 2023 · 27 comments
Labels: area/dashboard 📊, area/kubernetes ☸️, kind/epic ⚡, lifecycle/stale

vietk commented Jun 15, 2023

Is your enhancement related to a problem? Please describe

Hello,

I am currently testing alternative solutions to Docker Desktop and its local Kubernetes cluster.
In my workflow, I often build a container image for my application and would like to test it directly on the local Kubernetes cluster.
The nice feature of Docker Desktop in this flow is that once the container image is built, it is instantly available to the Kubernetes cluster with no manual action.

I saw that in the Podman Desktop UI it's possible to push an image to the kind/openshift instance started by Podman Desktop.

Push your image to your Kind cluster:
Search images: enter your image name my-custom-image to find the image.
Click > Push image to Kind cluster.
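
For context, here is a minimal CLI sketch of roughly what that UI action corresponds to (my-custom-image is the placeholder name from the docs above; kind load image-archive is the kind command for loading a saved tarball into the default cluster):

# build the image with podman, save it to a tarball, and side-load it into the kind node
podman build -t my-custom-image .
podman save -o my-custom-image.tar my-custom-image
kind load image-archive my-custom-image.tar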


Describe the solution you'd like

Although there is already a working solution (which I could only try on Kind), I think it could be improved by making built images automatically available, the way Docker Desktop does.

One drawback is that we may have two copies of the same container image, one living in the podman storage and one living in the container registry inside Kubernetes.

Thanks for reading
Regards

Describe alternatives you've considered

No response

Additional context

No response

vietk added the kind/enhancement ✨ label Jun 15, 2023
vietk changed the title from "Podman desktop should automatically load image in the kind/microshift cluseter" to "Podman desktop should automatically load built images in the kind/microshift cluster" Jun 15, 2023
benoitf (Collaborator) commented Jun 15, 2023

While automation could be possible, it means that if you use the CLI without Podman Desktop running, it won't work.
Would that be fine for your use case?

vietk (Author) commented Jun 15, 2023

Yeah, that would be fine, because I understand that we have two storages here.

One helper could be a status in the image panel indicating, for each image, whether it is missing or out of date in the Kubernetes registry. WDYT?

benoitf (Collaborator) commented Jun 15, 2023

Having a sync indicator in the images list or in the image details could help, yes.

I will add comments on #2623 (which is to have a more integrated way across all 3rd party kubernetes clusters)

benoitf (Collaborator) commented Jun 15, 2023

We also need to take care that the image pull policy on the cluster is not Always, otherwise side-loaded images would be re-pulled from a registry.
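
For illustration, a hypothetical check and fix on an existing deployment (my-app is a made-up name; the patch just flips the policy so the kubelet uses the side-loaded image instead of re-pulling):

# check the current policy of the first container
kubectl get deployment my-app -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'
# set it to IfNotPresent so a locally loaded image is used
kubectl patch deployment my-app --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"IfNotPresent"}]'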

benoitf added the area/dashboard 📊 and area/kubernetes ☸️ labels and removed the status/need-triage label Jun 15, 2023
afbjorklund (Contributor) commented

When using the minikube cluster, the image is built on the control plane node for the container runtime in use:

https://minikube.sigs.k8s.io/docs/handbook/pushing/

(for cri-o, that means doing a sudo podman build)

So there is no need to load it afterwards, although you can build on the host and load it - if you prefer to do so.
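
For example, the two variants with minikube look roughly like this (my-custom-image is a placeholder):

# build directly inside the cluster, for the container runtime in use
minikube image build -t my-custom-image .
# or build on the host and load it afterwards
podman build -t my-custom-image .
minikube image load my-custom-image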

vietk (Author) commented Jun 19, 2023

Hello,

I was wondering how the automation could take place, because the option to load an image into a Kubernetes cluster is quite slow (more than 1 min 30 s to push a 1 GB image).
I imagine that under the hood the push is done with podman save followed by kind load; I tested that and it takes more or less the same time to land in the kind storage.

I know it complicates the enhancement a bit, but could we imagine Podman Desktop installing a registry in the Kubernetes cluster and automating the push to that registry directly (rather than using kind load)?
It could be a portable way of implementing automatic push for any k8s distribution, as long as we can access a registry, and it would probably be faster.
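
A rough sketch of that registry flow, assuming a registry is already deployed in the cluster and reachable at localhost:5000 over plain HTTP (both are assumptions, not something Podman Desktop sets up today):

# retag and push; only changed layers travel over the wire
podman tag my-custom-image localhost:5000/my-custom-image
podman push --tls-verify=false localhost:5000/my-custom-image
# pods then reference localhost:5000/my-custom-image in their specs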

Regards

benoitf self-assigned this Aug 16, 2023
cdrage (Contributor) commented Aug 16, 2023

Thank you @vietk for opening this and offering some other solutions too.

This is pretty important for being able to go from pods/containers to Kubernetes, as images are probably the hardest part of the equation for the Kubernetes cluster to access.

I'll need to do some more research, but you are right, there are two solutions:

  • Using kind load image and adding it to the kind cluster
  • If using another Kubernetes cluster such as k3s, PERHAPS deploying a registry / making it accessible.

Either way, we need:

  • Documentation on how to do this correctly with multiple clusters, whether it is AWS, k3s, bare metal, kind, minishift, etc.
  • Ways to do this, such as the method we use now to "push to kind", as well as perhaps for other k8s clusters.

benoitf assigned cdrage and unassigned benoitf Aug 16, 2023
cdrage moved this from 📋 Backlog to 📅 Planned in Podman Desktop Planning Aug 17, 2023
afbjorklund (Contributor) commented Aug 18, 2023

  • If using another Kubernetes cluster such as k3s, PERHAPS deploying a registry / making it accessible.

Deploying a registry is still overkill for most, compared to something simpler like k3d image import.
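
For example (mycluster is a placeholder cluster name):

# import a locally built image into an existing k3d cluster
k3d image import my-custom-image -c mycluster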

cdrage (Contributor) commented Aug 21, 2023

I agree: for development, built-in functions such as k3d image import as well as kind load image are ideal.

For bare metal clusters, everyone's going to be using a different registry (Sonatype, Docker Hub, Quay, etc.) or something locally hosted. Instead of PD managing it, maybe we should just point to the correct documentation and say we do not support it.

Images are tricky. If there happens to be a super-simple self-hosted registry solution available that works with most bare metal clusters, I would consider implementing it, but I'll have to do more research into that.

afbjorklund (Contributor) commented Aug 21, 2023

Images are tricky. If there happens to be a super-simple self-hosted registry solution available that works with most bare metal clusters, I would consider implementing it, but I'll have to do more research into that.

Deploying the registry is rather straightforward, but distributing the certificate is a pain. Most people cheat with HTTP, but then they need the insecure-registry setting, so you end up with horrible hacks like the localhost:5000 proxy, etc.
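
For illustration, the insecure-registry setting on the podman side would look something like this (localhost:5000 is the assumed registry address from above):

# append an insecure-registry entry to the containers registries config
cat <<'EOF' | sudo tee -a /etc/containers/registries.conf
[[registry]]
location = "localhost:5000"
insecure = true
EOF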

EDIT: I almost forgot about storage. That was the second biggest headache for the cluster-wide deployment.
The registry works fine for a while, until you redeploy it and all the images are left in volumes on the old node.

afbjorklund (Contributor) commented

Here are some historical references, for the kubernetes / minikube "registry" add-on:

afbjorklund (Contributor) commented

I agree: for development, built-in functions such as k3d image import as well as kind load image are ideal.

These methods are actually super-slow, if you are making a small change (build context) to a big image (layers).

afbjorklund (Contributor) commented Aug 22, 2023

One drawback is that we may have two copies of the same container image, one living in the podman storage and one living in the container registry inside Kubernetes.

This could be fixed by running cri-o inside the cluster and using podman to build images directly for it. This is what minikube does (with image build), for instance...
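
A rough sketch of that shared-store idea, assuming podman runs on the cluster node itself and both rootful podman and cri-o use the default /var/lib/containers/storage store (the usual defaults, not something Podman Desktop sets up today):

# build as root into /var/lib/containers/storage, the store cri-o also reads
sudo podman build -t my-custom-image .
# cri-o can then start the pod without any pull or load step
sudo crictl images | grep my-custom-image   # hypothetical verification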

If a registry were deployed, it could expose a build service in the same way. A slower workaround could be to deploy kubectl build and do the build in a cluster pod.

vietk (Author) commented Aug 22, 2023

This could be fixed by running cri-o inside the cluster and using podman to build images directly for it. This is what minikube does (with image build), for instance...

As long as it's transparent for the user and still allows "podman build -t .", I think it's a good solution for the whole issue.

afbjorklund (Contributor) commented

I think that kind running on Docker Desktop would have the same issues as kind running on Podman Desktop.

afbjorklund (Contributor) commented Aug 22, 2023

As long as it's transparent for the user and still allows "podman build -t .", I think it's a good solution for the whole issue.

There are some hacks to use "docker build" both for containerd and for cri-o, but "podman build" is not a standard.

And unfortunately "docker buildx" diluted the other standard API anyway, so I think we're back to having no common standard.

Theoretically there could be some common buildImage, but otherwise we are stuck waiting on loadImage.

The basic parameters would be same as for minikube image build and minikube image load CLI commands.

vietk (Author) commented Aug 22, 2023

I think that kind running on Docker Desktop would have the same issues as kind running on Podman Desktop.

Yeah for sure, I was using k3d and exposed a local registry to avoid this exact issue.

afbjorklund (Contributor) commented Aug 23, 2023

Previously, the flow was like:

docker build
docker run

Now, it is more something like:

engine1 build
engine1 save
engine2 load
engine2 run

With a registry, that would be:

engine1 build
engine1 push
engine2 pull
engine2 run

The benefit here is that only the changed layers would need to be pushed, not all of them (including any large base)

It still takes longer than not having to push anything at all (instant) like before, but at least it is slightly better.

The fastest workflow is not building new images at all but using hot reloading, though not everything supports that.

vietk (Author) commented Aug 23, 2023

The benefit here is that only the changed layers would need to be pushed, not all of them (including any large base)

So true!

nichjones1 assigned benoitf and unassigned cdrage Sep 27, 2023
afbjorklund (Contributor) commented

Since Podman Desktop does not support CLI tools like nerdctl and crictl, I added some notes on using the Docker API.

You could expose your podman.sock, but that would require the kind cluster to use cri-o instead of containerd...

So instead one exposes nerdctl.sock and talks directly to that for loading (containerd) and building (buildkitd) images.

The docker client takes care of sending the tarball, with the image archive or the build context, as part of the command.


docker load

  • nerdctl load => containerd
  • podman load => cri-o

DOCKER_BUILDKIT=0 docker build .

  • nerdctl build => containerd (using buildkit)
  • podman build => cri-o (using buildah)
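
A minimal sketch of that flow, assuming such a nerdctl.sock shim is already forwarded from the kind node to the host (the socket path here is hypothetical):

# point the docker client at the forwarded socket instead of the local Docker
export DOCKER_HOST=unix:///tmp/nerdctl.sock
# load an image archive straight into containerd's image store
docker load -i my-custom-image.tar
# build via the legacy API, served by buildkitd behind the shim
DOCKER_BUILDKIT=0 docker build -t my-custom-image .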

Unfortunately it is not possible to have Podman Desktop communicate with alternative runtime sockets.

The value for docker.sock is hardcoded within the PD application, so that it only talks to the local Docker.

benoitf moved this from 📅 Planned to 🚧 In Progress in Podman Desktop Planning Oct 2, 2023
benoitf (Collaborator) commented Oct 12, 2023

Hello, here is a summary of an attempt using minikube and cri-o, sharing some folders used by the podman machine.

https://gist.github.com/benoitf/3e45effb48e27791282eb227410f5950

I'll file an issue in the minikube repository to see how we could integrate this option.

benoitf (Collaborator) commented Oct 13, 2023

Here is the issue in the minikube repository: kubernetes/minikube#17415

afbjorklund (Contributor) commented Oct 19, 2023

It seems like the podman driver with the cri-o container runtime works rather poorly with newer Podman versions (4.7.0)?

minikube start --driver=podman --container-runtime=cri-o

It worked on Ubuntu 20.04, but fails under podman machine.

benoitf (Collaborator) commented Oct 19, 2023

Do you see any errors?

afbjorklund (Contributor) commented Oct 19, 2023

Do you see any errors?

I posted them on minikube, but "yes". I was comparing the results between podman (3.4.2) and podman-remote-static (4.7.0), both running as root and both on Ubuntu 20.04. There are some issues with netavark and with cgroups v2...

podman-remote-static machine init --now --rootful --cpus 2
export CONTAINER_CONNECTION=podman-machine-default-root

Is it working OK on the Mac?

benoitf added the kind/epic ⚡ label and removed the kind/enhancement ✨ label Nov 2, 2023
github-actions bot commented

This issue has been automatically marked as stale because it has not had activity in the last 6 months. It will be closed in 30 days if no further activity occurs. Please feel free to leave a comment if you believe the issue is still relevant. Thank you for your contributions!

github-actions bot commented

This issue has been automatically closed because it has not had any further activity in the last 30 days. Thank you for your contributions!

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Jun 20, 2024
github-project-automation bot moved this from 🚧 In Progress to ✔️ Done in Podman Desktop Planning Jun 20, 2024