Podman Desktop should automatically load built images in the kind/microshift cluster #2866
Comments
While automation could be possible, it means that if you use the CLI without Podman Desktop running, it won't work.
Yeah, that would be fine, because I understood that we have two image storages here. One helper could be a status in the image panel telling, for each image, whether it is missing or not up to date in the Kubernetes registry. WDYT?
Having an indicator for the sync status in the images list or image details could help, yes. I will add comments on #2623 (which is about having a more integrated way across all 3rd-party Kubernetes clusters).
Need to take care also that
When using the minikube cluster, the image is built on the control plane node for the container runtime in use: https://minikube.sigs.k8s.io/docs/handbook/pushing/ So there is no need to load it afterwards, although you can build on the host and load it, if you prefer to do so.
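A minimal sketch of those two minikube options, based on the linked handbook page (the image name myapp:dev and the archive path are placeholders):

```sh
# Option 1: build directly against the cluster's container runtime,
# so there is no separate load step afterwards.
minikube image build -t myapp:dev .

# Option 2: build on the host with podman, export, and load into the cluster.
podman build -t myapp:dev .
podman save -o myapp.tar myapp:dev
minikube image load myapp.tar
```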
Hello, I was wondering how the automation could take place, because the option to load an image into a Kubernetes cluster is quite slow (more than 1 min 30 s to push a 1 GB image). I know it complicates the enhancement a bit, but can we imagine Podman Desktop installing a registry in the Kubernetes cluster and automating the push to that registry directly (rather than using kind load)? Regards
Thank you @vietk for opening this and offering some other solutions too. This is pretty important for being able to go from Pods / Containers to Kubernetes, as images are probably the hardest part of the equation for the Kubernetes cluster to access. I'll need to do some more research, but you are right, there are two solutions:
Either way, we need:
Deploying a registry is still overkill for most, compared to something simpler.
I agree; for development, built-in functions such as those are fine. For bare metal clusters, everyone's going to be using a different registry (Sonatype, Docker Hub, Quay, etc.) or something locally hosted. Instead of PD managing it, maybe we should just point to the correct documentation / say we do not support it. Images are tricky. If there happens to be a super-simple self-hosted registry solution available that works with most bare metal clusters, I would consider implementing it, but I'll have to do more research into that.
Deploying the registry is rather straightforward, but distributing the certificate is a pain. Most people cheat with HTTP, but then they need the insecure-registry setting, so you end up with horrible hacks. EDIT: I almost forgot about storage. That was the second biggest headache for the cluster-wide deployment.
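For illustration, the usual HTTP shortcut looks roughly like the kind local-registry recipe below. This is a sketch rather than an endorsement: the container name kind-registry, port 5001, and the assumption that the podman provider creates a network called "kind" all come from that recipe and may need adjusting, and newer kind releases document a slightly different containerd configuration.

```sh
# Plain-HTTP registry running next to the kind nodes.
podman run -d --name kind-registry -p 127.0.0.1:5001:5000 registry:2

# Create the cluster with an insecure mirror entry pointing at that registry.
# (Older kind releases need KIND_EXPERIMENTAL_PROVIDER=podman for the podman provider.)
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5001"]
    endpoint = ["http://kind-registry:5000"]
EOF

# Let the nodes resolve the registry by name
# (assumes the provider created a network called "kind").
podman network connect kind kind-registry
```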
Here are some historical references for the Kubernetes / minikube "registry" add-on:
These methods are actually super slow if you are making a small change (build context) to a big image (layers).
This could be fixed by running cri-o inside the cluster and using podman to build images directly for it.
If there was a registry deployed, that registry could expose a build service the same way. A slower workaround could be to deploy such a build service in the cluster.
As long as it's transparent for the user and allows "podman build -t .", I think it's a good solution for the whole issue.
I think that
There are some hacks to use "docker build" both for containerd and for cri-o, but "podman build" is not a standard. And unfortunately "docker buildx" diluted the other standard API anyway, so I think we're back to having no common API. Theoretically there could be some common build API; the basic parameters would be the same as for the existing build commands.
Yeah, for sure. I was using k3d and exposed a local registry to avoid this exact issue.
Previously, the flow was like:
Now, it is more like:
With a registry, that would be:
The benefit here is that only the changed layers would need to be pushed, not all of them (including any large base image). It still takes longer than not having to push anything at all (instant) like before, but at least it makes it slightly better. The fastest workflow is not building new images at all but using hot reloading, though not everything supports that.
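To make the layer argument concrete, a hedged sketch assuming a local registry reachable at localhost:5001 (as in the earlier registry example; the image name is a placeholder):

```sh
# Rebuild after a small source change: the base layers are unchanged,
# so the push only uploads the new layers.
podman build -t localhost:5001/myapp:dev .
podman push --tls-verify=false localhost:5001/myapp:dev
# The Deployment then references localhost:5001/myapp:dev and the cluster pulls it.
```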
So true!
Since Podman Desktop does not support CLI tools like these, one could instead expose the container runtime socket from the cluster to it.
Unfortunately it is not possible to have Podman Desktop communicate with alternative runtime sockets.
Hello, here is a summary of an attempt using minikube and cri-o by sharing some folders used by podman machine: https://gist.github.com/benoitf/3e45effb48e27791282eb227410f5950 I'll file an issue in the minikube repository to see how we could integrate this option.
Here is the issue in the minikube repository: kubernetes/minikube#17415
It seems like the podman driver with the cri-o container runtime works rather poorly with newer Podman versions (4.7.0)?
It worked on Ubuntu 20.04, but fails under podman machine.
Do you see any errors?
I posted them on the minikube issue, but "yes". I was comparing the results between podman (3.4.2) and podman-remote-static (4.7.0), both running as root and both on Ubuntu 20.04. There are some issues with netavark and with cgroups v2...
Is it working OK on the Mac?
This issue has been automatically marked as stale because it has not had activity in the last 6 months. It will be closed in 30 days if no further activity occurs. Please feel free to leave a comment if you believe the issue is still relevant. Thank you for your contributions!
This issue has been automatically closed because it has not had any further activity in the last 30 days. Thank you for your contributions!
Is your enhancement related to a problem? Please describe
Hello,
I am currently testing alternative solutions to Docker Desktop and its local Kubernetes cluster.
In my workflow, I often build a container image for my application and would like to test it directly on the local Kubernetes cluster.
The nice feature of Docker Desktop in this flow is that once the container image is built, it is instantly available in the Kubernetes registry without any manual action.
I saw that in the Podman Desktop UI it's possible to push an image to the kind/OpenShift instance started by Podman Desktop.
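For context, the manual CLI equivalent of that UI action, i.e. the steps the requested automation would run after each build, might look roughly like this (a sketch; myapp:dev and the archive path are placeholders, and the default kind cluster name is assumed):

```sh
# Build with podman as usual.
podman build -t myapp:dev .

# Manual steps that the requested automation would trigger after each build:
podman save -o myapp.tar myapp:dev
kind load image-archive myapp.tar
```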
Describe the solution you'd like
Although there's already a working solution (which I could try only on kind), I think it could be improved by making built images automatically available, as Docker Desktop offers.
One drawback is that we may have two copies of the same container image, one living in the podman storage and one living in the container registry inside Kubernetes.
Thanks for reading
Regards
Describe alternatives you've considered
No response
Additional context
No response