consolidate image names #293
What was the reason we were not using multiarch images? For easier pinnings? 🤔 edit: nvm, found it.
I wanted to see what this looked like in the pinning, so I opened a demo PR. However, while putting that together, I realized that we can do even better by using the same image tags as we use for our distro naming, so that we can just directly insert:

```yaml
docker_image:  # [os.environ.get("BUILD_PLATFORM", "").startswith("linux-")]
  # images for non-CUDA-enabled builds
  - quay.io/condaforge/linux-anvil-x86_64:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}  # [os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-aarch64:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}  # [os.environ.get("BUILD_PLATFORM") == "linux-aarch64"]
  - quay.io/condaforge/linux-anvil-ppc64le:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}  # [os.environ.get("BUILD_PLATFORM") == "linux-ppc64le"]
  # images for CUDA 11.8 builds (no choice via DEFAULT_LINUX_VERSION available)
  - [omitted here]
  # images for CUDA 12 builds
  # case: native compilation (build == target)
  - quay.io/condaforge/linux-anvil-x86_64:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}  # [linux64 and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-aarch64:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}  # [aarch64 and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("BUILD_PLATFORM") == "linux-aarch64"]
  - quay.io/condaforge/linux-anvil-ppc64le:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}  # [ppc64le and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("BUILD_PLATFORM") == "linux-ppc64le"]
  # case: cross-compilation (build != target)
  - quay.io/condaforge/linux-anvil-x86_64:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}  # [aarch64 and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-x86_64:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}  # [ppc64le and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("BUILD_PLATFORM") == "linux-64"]
```

I've updated the OP to reflect that proposal. This also has the advantage that if we ever do have different distros per generation, the image tag can absorb that distinction.
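For concreteness, a minimal sketch of what the first entry should render to with `DEFAULT_LINUX_VERSION` unset, so the `alma9` default applies (this is the expected outcome, not verified output):

```yaml
# expected rendered form of the first non-CUDA entry (DEFAULT_LINUX_VERSION unset)
docker_image:  # [os.environ.get("BUILD_PLATFORM", "").startswith("linux-")]
  - quay.io/condaforge/linux-anvil-x86_64:alma9  # [os.environ.get("BUILD_PLATFORM") == "linux-64"]
```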
I've tried rerendering an affected feedstock with the setup from conda-forge/conda-forge-pinning-feedstock#6687, and I think this might not work. AFAICT, jinja variables in the pinning don't get resolved, but rather just get inserted verbatim, as follows:

```diff
--- a/.azure-pipelines/azure-pipelines-linux.yml
+++ b/.azure-pipelines/azure-pipelines-linux.yml
@@ -11,39 +11,45 @@ jobs:
   linux_64_cuda_compilerNonecuda_compiler_versionNonecxx_compiler_version13:
     CONFIG: linux_64_cuda_compilerNonecuda_compiler_versionNonecxx_compiler_version13
     UPLOAD_PACKAGES: 'True'
-    DOCKER_IMAGE: quay.io/condaforge/linux-anvil-alma-x86_64:9
+    DOCKER_IMAGE: quay.io/condaforge/linux-anvil-x86_64:{{ environ.get("DEFAULT_LINUX_VERSION",
+      "alma9") }}
   linux_64_cuda_compilercuda-nvcccuda_compiler_version12.0cxx_compiler_version12:
     CONFIG: linux_64_cuda_compilercuda-nvcccuda_compiler_version12.0cxx_compiler_version12
     UPLOAD_PACKAGES: 'True'
```

which isn't going to work. 😑
Following the core call today, we ended up with the following set.
Thanks Axel! 🙏 This seems reasonable to me. Would it make sense to have a news entry with this change? Asking in case others are using our images and will need to make updates.
The old images will still be available, but it's no problem to write a short update for this once #291 lands.
I think it'll be worthwhile anyway to write an update when the default image changes, which would also provide a canonical place where we can tell people to stay on the oldest image for feedstocks that are doing binary repackaging.
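For such feedstocks, a hypothetical sketch of what staying on the older image could look like in `conda-forge.yml`, assuming conda-smithy's existing `os_version` mechanism carries over to the new distro-version tags (the `alma8` values are an illustration, not confirmed by this thread):

```yaml
# conda-forge.yml (hypothetical): keep a binary-repackaging feedstock
# on the older image generation instead of the new default
os_version:
  linux_64: alma8
  linux_aarch64: alma8
  linux_ppc64le: alma8
```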
Certainly, though sometimes folks don't notice and pick up the old image by accident (for example: #277 (comment)). Indeed, I think it makes sense to have one news entry for all of these changes. As noted, the old images will be around, so we can do this after making the changes if that is easier.
Closed by #291
As of #287 / #290 / #291, we'll have the following images.

I think it would be nice if we could consolidate this (updated!; v0 below). This has the advantage that we don't have to duplicate the specification in the pinning across image versions, because we can just directly insert `DEFAULT_LINUX_VERSION` through jinja (see comment below).

Old proposal:
This is because the introduction of another distro for the same RHEL generation (e.g. rocky8 next to alma8) is completely unrealistic at the moment, so there's no need to encode the distro in the image name. And if that time ever comes, we could still rename the images again.
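For reference, the two concrete spellings that occur elsewhere in this thread for the same image, showing where the distro information lives (purely illustrative):

```yaml
# distro encoded in the image name (as seen in the rerender diff above):
- quay.io/condaforge/linux-anvil-alma-x86_64:9
# distro encoded in the tag (the current proposal):
- quay.io/condaforge/linux-anvil-x86_64:alma9
```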
If we want to encode the distro version in the CUDA images as well, we could, but I think this is not necessary because the CUDA 11.8 images are on the way out. It would only become relevant IMO if some dependency starts requiring `__glibc>=2.34` while we're still supporting 11.8 (though we could also just move the CUDA 11.8 images to ubi9 universally in that case, c.f. conda-forge/conda-forge-pinning-feedstock#6283). Note that there are no ubi9 images yet (and neither are there cos7 images for aarch/ppc).
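For context, such a requirement would typically surface as a dependency on the `__glibc` virtual package in a recipe, roughly like this (hypothetical fragment, not from this thread):

```yaml
# hypothetical meta.yaml fragment: a run requirement on the glibc
# virtual package; on images whose glibc is older than 2.34, packages
# carrying this constraint could not be installed at build/test time
requirements:
  run:
    - __glibc >=2.34
```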