Since we've separated the baselines between docker_images and the sysroot (and the docker_image bump has now happened), this question will inevitably come up, so we might as well have an issue to reference and discuss.
Note: this is likely still years away!
With that said, here's the background. The timeline of changes for our images and sysroot baseline is as follows:

| when | sysroot | docker image | comment |
|---|---|---|---|
| until Nov. 2021 | 2.12 (cos6) | cos6 | matching distro version, until discussion here eventually led to the split |
| Nov. 2021 | 2.12 (cos6) | cos7 | This needed a lot of preparation (mainly rolling out `stdlib`); finally done here |
| Nov. 2024 | 2.17 (cos7) | alma9 | Due to how long things got held up (for `stdlib` & CDTs), we actually ended up leapfrogging alma8 and went directly to alma9; see discussion and execution |
| ??? | ? | alma10 | Will likely be released in 2025, but it's unlikely to be necessary as infrastructure for packages until at least 2-3 years after that |
| ??? | 2.28 (alma8) | ? | The topic of this issue 🙃 |
We are able to separate these two things because we can distinguish what we compile against (i.e. the sysroot, which we package and pin to a specific version) from the actual glibc version that's present in the image. Some more explanation is in this announcement.
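To make the separation concrete: the image side is controlled per feedstock via the `os_version` key in `conda-forge.yml`; a minimal sketch (the value shown is just illustrative):

```yaml
# conda-forge.yml -- selects the docker image, i.e. the glibc that is
# actually present at build time (illustrative value; alma9 ships glibc 2.34)
os_version:
  linux_64: alma9
```

The image just needs a glibc at least as new as the sysroot baseline; what built packages actually require at runtime is determined by the sysroot, not by the image.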
Part of the discussion in #1436, and why it took so long to move on, is that conda-forge has some large users (e.g. academic institutions) that move glacially w.r.t. infrastructure updates, and we try to support those use-cases as long as we feasibly can (not least because some core members are involved there). That meant we went all the way to the very end of life of cos6 (even counting the paid extension, which isn't actually relevant for us!) before dropping it. This was not a causal relationship, but a reflection of how hard we've tried to stay on cos6 for as long as feasible.
For the move from cos7 to alma8, the situation will very likely repeat, in that we want to be as conservative as possible about upgrading the baseline, while enabling individual feedstocks to pull in a newer baseline (using `c_stdlib_version`) as necessary. A good reference is https://mayeut.github.io/manylinux-timeline/, which provides an overview of consumer glibc versions across all downloads from PyPI[^1]. As of Nov. 2024, around 85% of PyPI downloaders on linux have `__glibc>=2.28`, and this number rises to 94% if we don't count people on Python versions that are already EOL.
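As a concrete example of that opt-in, a feedstock that genuinely needs a newer baseline can set this in `recipe/conda_build_config.yaml` and re-render; a minimal sketch (the version shown is illustrative):

```yaml
# recipe/conda_build_config.yaml -- raise the glibc baseline this feedstock
# compiles against; packages built this way then require __glibc>=2.28 at runtime
c_stdlib_version:   # [linux]
  - "2.28"          # [linux]
```

After re-rendering, the builds pick up the newer sysroot without waiting for the global default to move.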
Note, however, that we didn't drop glibc 2.12 until it was below 0.1% of downloads (at least via the proxy of PyPI downloads). Perhaps that's a little extreme (and also owed to the stdlib work that took a while to put in place), but in any case, it will likely take either a very low percentage of affected users or very strong technical constraints forcing our hand before we move on.
In short: enjoy the new alma-based infrastructure, move the glibc baseline for your feedstock if it cannot be avoided, but the global baseline will likely still take quite a while to move.
[^1]: I'd really like to have an equivalent for conda-forge, but that's another story