Use alma9 images on linux, while staying on 2.17 sysroot #6283
Comments
It would be great if we could do this for aarch + cuda12 to start. But I think we should generally move the base image forward. xref: https://github.com/conda-forge/pytorch-cpu-feedstock/blob/main/recipe/conda_build_config.yaml#L17
AFAIU the only remaining issue is the reduction of CDTs from cos7 to alma8; we should try to do a special "remove/replace CDTs" migration because breaking 100+ feedstocks is not really a good option, even if we provide a way to opt into the old images again.
Ah I see, the tk feedstock throws quite a thorn into the "rip out all the CDTs too" plan.
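For concreteness, here is a rough before/after sketch of what "replace CDTs with our own deps" could look like for an X11-style dependency. The `cdt()` jinja helper and `xorg-libx11` are existing conda-forge conventions, but whether a particular feedstock (like tk) can actually switch depends on what it links against:

```yaml
# before: headers/libs come from a CentOS-era CDT in the build environment
requirements:
  build:
    - {{ cdt('libx11-devel') }}  # [linux]

# after (sketch): the conda-forge-built equivalent lives in host like any other dep
requirements:
  host:
    - xorg-libx11  # [linux]
```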
Using alma8 images and using an up-to-date sysroot are two different problems. In order to change to alma8 images, we just need to check whether all the requirements in the feedstocks' `yum_requirements.txt` files are still available there.
It doesn't have to be an exhaustive check; trying just a few feedstocks that use yum requirements, to see if it really works, would be enough.
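A minimal way to do such a spot-check could look like the sketch below; it simply asks `dnf` inside an alma8-based image whether each entry of a feedstock's `yum_requirements.txt` still resolves. The image name/tag is an assumption, substitute whatever the docker-images repo actually publishes:

```bash
# spot-check a single feedstock's yum requirements against an alma8-based image
while read -r pkg; do
  # skip blank lines and comments that yum_requirements.txt may contain
  [ -z "$pkg" ] && continue
  case "$pkg" in "#"*) continue ;; esac
  if docker run --rm quay.io/condaforge/linux-anvil-alma-x86_64:8 \
       dnf -q info "$pkg" > /dev/null 2>&1; then
    echo "OK:      $pkg"
  else
    echo "MISSING: $pkg"
  fi
done < recipe/yum_requirements.txt
```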
Wonder if a smaller step would simply be making it a bit easier for users to opt in to using the newer images. Did a little exploration of this in PR #6548. Likely needs more work, but maybe it is already useful for discussion/iteration at this stage.
So I took the content of all the `yum_requirements.txt` files across the feedstocks and put together a list of which system packages are actually being used.
Thanks for pulling together this list Axel and going over it in today's call! 🙏 Cleaned up the list a bit. What do we think about making a migrator? At least with X11, this seems essential given that it gets pulled in all over the place. Though this may extend to other places.
My understanding was that a migrator is not necessary. The yum_requirements will continue to work, and if CDTs end up missing they can be replaced with our own deps, or alternatively users set their feedstock back to an older image.
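For reference, opting back into older images on a per-feedstock basis could look roughly like this in `conda-forge.yml`, assuming the existing `os_version` knob keeps working as it does today (followed by a rerender):

```yaml
# conda-forge.yml (sketch; assumes the current `os_version` mechanism stays available)
os_version:
  linux_64: cos7        # keep the CentOS-7-based image for this platform
  linux_aarch64: cos7
```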
In the discussion in #6548, @carterbox made the point (IIUC) that we may want to simply use alma 9(!) images by default, even if we keep the sysroot at 2.17 (== cos7). The reason being (paraphrasing according to my interpretation) that it removes one mostly unnecessary dimension from the whole pinning exercise.

In principle I think this sounds like a good idea to me - the actual glibc in the image doesn't matter from the POV of building packages or any of our metadata; it only needs to be new enough to run binaries for building or testing that need newer symbols (resp. to resolve the test environment if any dependency - including the package being built - requires a newer glibc).

So if someone decides to use the 2.34 sysroot in the near future, then the question about how to change the image version would simply be obsolete, if the containers are always the newest ones (matching the rest of our infrastructure, of course).

The question becomes what, if any, failures are possible if the container image is too new. I suspect that this would be very rare (otherwise we would have been hitting such cases all the time when we used cos7 images to build cos6 stuff).
This summarizes my point correctly. I am also curious if anyone can think of a case in which having a container that is too new would cause problems. Maybe this is something for the next core meeting. edit: There aren't any meeting notes for the next meeting available yet.
Usually not. There are some rare cases, for example:
That matches my understanding too. And obviously those cases could still choose alma8 (or even cos7) images. We can also leave the option of selecting the older images in place for exactly those situations.
Here you go: conda-forge/conda-forge.github.io#2350
Minor comment: I do not think |
Yes, that's part of the compile-time/run-time split between CDTs and
I'm encountering a strange issue in conda-forge/pyarrow-feedstock#139, where a dependency has been compiled against glibc 2.28. This happens only in the cross-compilation builds, but it happens in all of them, and that despite the image clearly having a new enough glibc (as confirmed in the linked PR). So I have no idea what's going wrong there, though it could be something related to QEMU not providing full emulation for glibc 2.28 yet? I don't actually know how that bit works...
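For debugging this kind of mismatch, the checks below are the sort of thing one could run inside the (possibly emulated) container; they are generic commands, not something specific to the pyarrow feedstock:

```bash
# glibc that the image itself ships
ldd --version | head -n1

# what the conda solver sees in that environment: recent conda lists __glibc
# (alongside __linux, __archspec, ...) in the "virtual packages" section
conda info | grep -A5 "virtual packages"
```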
Happy to chat if you are still seeing issues. Follow up in this thread: conda-forge/pyarrow-feedstock#139 (comment)
Pretty sure the solution there was conda-forge/conda-forge-ci-setup-feedstock#368
Here's another follow-up for the image switch: conda-forge/docker-images#299
Since it's taken us so long to upgrade from CentOS 6 to 7, we'll very quickly find ourselves in a situation that we struggled with already some time ago (see discussion in this issue): a growing number of feedstocks will require a newer glibc, and our images should provide a new enough baseline version, so that the only thing that feedstocks need to actively override is `c_stdlib_version` (and thus the sysroot), but nothing else.

For example, Google's "foundational" support matrix defines a lower bound of glibc 2.27, meaning that things like abseil/bazel/grpc/protobuf/re2 etc. will start relying on glibc features >2.17 in the near future (and even though it's not a Google project, one of the baseline dependencies of that stack is starting to require it).
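With newer default images, such a feedstock would then only need to touch the stdlib pin, roughly like the sketch below (the `2.28` is illustrative, not a recommendation):

```yaml
# recipe/conda_build_config.yaml (sketch): opt a single feedstock into a newer sysroot
c_stdlib_version:   # [linux]
  - "2.28"          # [linux]
```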
We can handle the `c_stdlib_version` in the migrators (like we did for macOS 10.13, before the conda-forge-wide baseline was lifted), but changing more than one key of the mega-zip involving the docker images is really painful, especially if CUDA is involved (example), so having the images be alma8 by default will be very helpful there.

To a lesser extent, it will also save us from run-time bugs in older glibc versions, like we had with some broken trigonometry functions in 2.12 (before the image moved to cos7 while the sysroot still stayed at 2.12).
There are still other scenarios where this is necessary; see the discussion here for example.
While it's already possible to use alma8 images, the main thing we're blocked on is the lack of CDTs for alma8 (pending conda-forge/cdt-builds#66), cf. conda-forge/conda-forge.github.io#1941
This issue is for tracking this effort, and any other tasks that eventually need to be resolved before doing so.