Support opinionated flow for injecting containers into /usr/lib/containers
#246
Comments
That said, this heavily intersects with #69 - to do this nicely we really want the installed container layers to be "flattened" into the outer container.
This is something which would provide what I'm trying to do here. We have use cases where we need an ostree image to contain a set of containers in the form of an additionalimagestore (storage.conf), which would allow the containers to run without the need for crio to download them from the network.
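A minimal sketch of what that configuration could look like, assuming the preloaded store is shipped at /usr/lib/containers/storage (the path and the heredoc placement are assumptions, not something specified in this thread):

```
# Hypothetical: point containers/storage at a read-only additional image store shipped
# in the ostree image, so crio/podman can find the images without pulling them.
cat >/etc/containers/storage.conf <<'EOF'
[storage]
driver = "overlay"

[storage.options]
additionalimagestores = [ "/usr/lib/containers/storage" ]
EOF
```

additionalimagestores is the existing containers/storage mechanism for read-only stores; the open question in this thread is how the content under that path gets built and lifecycled.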
It's important to understand that in the general case, it won't work to embed containers/storage written data inside ostree - or really, inside a container either. This is because the whiteouts don't nest. See e.g. https://stackoverflow.com/questions/67198603/overlayfs-inside-docker-container. There are a few solutions:
Another approach instead is to have these containers in (This is a bit like what we do in OCP with the "release image" - that single container image has references to a big pile of other container images. Having lots of little floating containers really demands a means to tie them together in a coherent upgrade story.)
Ok, I'm confused now. What we are trying to achieve is something like adding a layer on ostree with:
What we are doing so far here is a
then we put that in tarballs, in an srpm, that goes into different architecture rpms, which, on install time, will:
I'm still in doubt whether that's the purpose of this RFE, or if it is something different. We don't want the container layers to be squashed; we want a pristine copy (I guess that would help for container validation/signatures, though I'm not sure it's 100% necessary to keep the integrity/authenticity of the layers).
When the "sub-container" content is rolled into a container image itself, we are inherently relying on the outer container image for signatures and such at pull time.
There are actually two cases we could support. One, where the child image is "lifecycle bound" with the host ostree (container). The container images involved here are always rolled into the top-level container. In theory we could support live updates to these containers in the same way we can support live updates to components of host userspace, but we'd normally expect system restarts. The second case is "preload optimization". Here, even though there are distinct storage systems involved, assuming that e.g.
The fact that the files being added to the ostree image are themselves related to container images is only relevant insofar as it makes it feel weird to also put those files into an RPM. If the thing being added to the filesystem wasn't already in a package of some sort, it would be natural to say "just make an RPM" (or whatever format).

Life-cycling a container image with the ostree image (use case 2) would still mean we would have to package the content we want to add to the filesystem. We don't want the ostree image to consider the content of the "application image" itself, we just want it to place that application image in a specific place on the filesystem. We could do that by wrapping the application image in another image, which also feels a little odd, but somehow less so than using an RPM. Maybe because it's easy to imagine building and publishing a simple wrapper image using the same build pipeline and registry that are used for the application image.
My understanding was that these files aren't "related to" container images. They are container images. They're expected to be run as a container.
Yeah, that's true. I'm stumbling a bit trying to come up with a way to differentiate between a container image that should be treated as a layer of the ostree image and an image that should be treated as any other file added to the filesystem.
I am not sure that makes sense. Broadly speaking, either it is managed, or it is not. If it is managed by ostree, it should be underneath the read-only bind mount, and we will apply transactional offline update semantics to it in the same way we do all other content. It should not be mutated in place by some other process (e.g. we can't support in-place changes made by the container runtime).

We already have a place for content not managed by ostree: /var.

For the latter case, what may be desired here is the ability for e.g. Image Builder to inject these images into an ISO for the initial installation - distinct from the ostree updates.
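As a small illustration of that split (a sketch; exact mount options vary by system), the managed tree is visible as a read-only mount while /var stays writable:

```
# On an ostree-based host, /usr is served from the read-only deployment,
# while /var is machine-local state that ostree does not manage.
findmnt -no TARGET,OPTIONS /usr
findmnt -no TARGET,OPTIONS /var
```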
This would be possible but ugly - among other problems we'd have to propagate any pull secrets and such for the image into the container build. What seems more elegant here is a process that operates outside of the base image, something like the flow sketched below.
OCI containers are just layers of tarballs. This tool would download the referenced sub-container image; there'd be no code executed as part of this, it's just basically writing the sub-container into an alternative root in a derived container image.
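A rough sketch of that flow, with hypothetical image names and paths, and skopeo assumed as the fetch mechanism (the final step of appending the tarball as a layer is left abstract):

```
# Everything here runs outside the base image build, so no pull secrets
# need to be injected into the build environment.
skopeo copy docker://quay.io/examplecorp/app:v1 oci:./app.oci:v1   # fetch the sub-container
mkdir -p root/usr/lib/containers/app
cp -a app.oci/. root/usr/lib/containers/app/                       # place it under the target root
tar -C root -cf app-layer.tar usr                                  # this tarball is the new layer
# app-layer.tar would then be appended to the base (ostree) container image as an extra layer.
```

Because the sub-container is stored as plain files (an OCI layout) rather than as containers/storage overlay data, this shouldn't hit the whiteout-nesting problem described above, since no overlay whiteout files exist on disk.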
Closing this for now as the focus is on the bootc side, where we now have trackers for this.
For e.g. coreos layering we've sort of inherently been focusing on the use case of e.g. rpm-ostree install usbguard inside the container. However, there's no reason we can't support embedding a container image that is then run via systemd, along the lines of the sketch below.
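A hypothetical sketch of what that could look like (the image name, unit name, and exact paths are assumptions, not something this issue specifies): fetch the image into an OCI directory under /usr/lib/containers and ship a unit that runs it.

```
# Hypothetical build-side steps for a derived image:
skopeo copy docker://quay.io/examplecorp/somecontainer:latest \
    oci:/usr/lib/containers/somecontainer:latest
install -D -m 0644 somecontainer.service \
    /usr/lib/systemd/system/somecontainer.service
mkdir -p /usr/lib/systemd/system/multi-user.target.wants
ln -s ../somecontainer.service \
    /usr/lib/systemd/system/multi-user.target.wants/somecontainer.service
```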
And somecontainer.service would be a systemd unit that uses systemd native features to run the container, e.g. RootDirectory etc.; or runs crun/runc.
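A minimal sketch of such a unit, assuming the embedded image has been unpacked to a root filesystem at a hypothetical path (RootDirectory is the systemd-native option named above; invoking crun/runc would be the alternative):

```
# Hypothetical unit dropped into the image at build time; paths and the
# service binary are assumptions for illustration.
cat >/usr/lib/systemd/system/somecontainer.service <<'EOF'
[Unit]
Description=Run the embedded somecontainer application

[Service]
# The service binary must exist inside RootDirectory= (chroot).
RootDirectory=/usr/lib/containers/somecontainer/rootfs
ExecStart=/usr/bin/someapp

[Install]
WantedBy=multi-user.target
EOF
```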
This crate is in a great position to implement this, because we're building up lots of tooling to bind ostree and containers.