Docker image workflow fails on the master branch #191
**What the actual error is**

Triggered from a push to the `master` branch.

**Why this is triggered now**

The changes in #186 only enabled the workflow to use the correct preCICE Docker image tag when building on `develop`. So, while this only appears now, it was always an issue that was simply never triggered. The changes of @valentin-seitz are correct, provided that both images are consistent (keep reading). It was not triggered so far because it would only be triggered on `master`.

**What triggers this error**

Indeed, the problem here is the current inconsistency in the setup between the `precice/precice:latest` and `precice/precice:develop` images. These originate from this GitHub Actions workflow of preCICE:
We do have plenty of images that we eventually need to clean up. They have so far been serving different purposes, but we need to synchronize them to cover more use cases. I summon @fsimonis for that.

**How to proceed**
In the system tests Dockerfile (which is a multi-stage build), we manually create a user (named `precice`).
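For illustration, creating such a user in the final stage of a multi-stage build might look like this. This is a minimal sketch, not the actual system tests Dockerfile; the base image, UID/GID, and paths are assumptions:

```dockerfile
# Sketch only: final stage of a hypothetical multi-stage build that
# creates an unprivileged user "precice", as the bindings' Dockerfile
# expects to find in the base image.
FROM ubuntu:22.04 AS runtime

# Create group and user with a fixed UID/GID (values are illustrative).
RUN groupadd --gid 1000 precice \
 && useradd --uid 1000 --gid precice --create-home --shell /bin/bash precice

# Run everything that follows as the unprivileged user.
USER precice
WORKDIR /home/precice
```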
The images serve different purposes:
This is the way to go. pip on Ubuntu 24.04 treats the system Python as externally managed (PEP 668) and refuses to install packages into it, even when passing the usual override flags.
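The venv approach in a Dockerfile could look roughly like this. A minimal sketch, assuming the bindings are installed with pip; the venv path is an arbitrary choice, not anything prescribed by the project:

```dockerfile
# Sketch of the venv approach on Ubuntu 24.04, where the system Python
# is externally managed (PEP 668) and pip refuses to install into it.
# Installing into a dedicated virtual environment sidesteps this.
FROM ubuntu:24.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends python3-venv \
 && rm -rf /var/lib/apt/lists/*

# Create the venv once; putting it first on PATH makes "python" and
# "pip" resolve into it for all later RUN instructions.
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
RUN pip install --upgrade pip
```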
Do we know of any users that use these, what they do with them, and whether it would actually make more sense to provide a user there? The naming is definitely confusing. I would always expect two images with the same name but different tags to only differ in (time-dependent) versions.
I am not aware of any users; it was a request at some point. By whom, I don't know. Personally, I don't see a big value, as we provide a Debian package. A reproducible image specifies the version of the base image. So, the user can copy & paste the URL of the matching deb package, or even check the deb package into the source tree.
I agree with all points. What's the solution though? Maybe we need some kind of tooling or develop image which installs preCICE in an Ubuntu CI image for all versions. Some ideas:
So, something like the current
A remark from my side: as a first step, I would suggest changing the following section of the code: `python-bindings/.github/workflows/build-docker.yml`, lines 35 to 44 (at 6de8993).
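For context, the kind of trigger section under discussion might look like the following. This is a hypothetical reconstruction, not the actual contents of `build-docker.yml`; the `paths` filter in particular is an assumption:

```yaml
# Hypothetical sketch: run the Docker image build only on pushes to
# develop, instead of on both master and develop.
on:
  push:
    branches:
      - develop
    # Optionally, only rebuild when the recipe itself changes
    # (illustrative path, not necessarily the real one):
    paths:
      - "tools/docker/Dockerfile"
```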
We did a pre-release that succeeded, and only when we merged into master did we run into a failure. This is too late. A pre-release exists exactly for the purpose of testing the packaging and distribution workflow, and this is currently not possible. Before changing how things are organized and named, I think we should make sure that the problem Ishaan describes above can be reproduced, and that this part can be tested without doing an actual live release.
For a long-term solution we should go for both. One does not exclude the other. The venv approach seems to be the way to go from the Python perspective, as @fsimonis mentions above. But this only cures the symptom: the release image not having the user `precice` is still a problem.
The image we need is …

Symptom or not, there are two issues that need to be resolved:
I propose the following:
For some reason, DockerHub only provides per-image statistics and no per-tag statistics, so we cannot differentiate between our CI pulls and people actually using the release images. At the time of writing, the image is at 15101 pulls total. Thus, separating release from developer images would allow us to track meaningful downloads of the release images. Workflows exist and only need to be moved around a bit. So, we only need a name for this image.
Transferring some points from the discussion in the dev channel.

@IshaanDesai is wondering what is wrong with …

@BenjaminRodenberg asks "Why can we not use the same Docker image for the release and the CI?". I like this direction, and we essentially need to restructure the images so that the CI is based on the same base image as the release. I would imagine that the base image builds and installs the Debian package, based on some tag. @fsimonis: what additional tool or other change wouldn't we be able to install on top of that?

@BenjaminRodenberg also suggested keeping the CI images under a separate namespace/organization, such as … We could then have …

@fsimonis raised the point that we currently don't even know if people are using the released images. To that, I think the main question is whether we want people to use those. If yes, we can advertise them and some will. He also raised the fact that we don't have statistics. This is important, but I would not see it as the primary goal here. Our Debian packages are also used by CI workflows.

@BenjaminRodenberg suggested that we use the GHCR for hosting the CI images. This is essentially the same argument as using a separate namespace. In any case, not pushing to a separate image repository helps with the bandwidth.

Before continuing, let's try to agree on what images we actually need for which use cases. The system tests don't need any; they are now decoupled from what each repository is doing.

A. Regarding the name, if we want …

Don't forget that we also have the GitHub Action: https://github.com/precice/setup-precice-action — this would be a better starting point for CI.

Overall, I have the feeling that we have (because of exploration) too many options at the moment. We need to merge and simplify, but first we need to map the needs and use cases.
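Using the setup-precice-action as a CI starting point might look roughly like this. A sketch only: the step layout is illustrative, and the action's actual inputs (if any) should be checked against its README rather than taken from here:

```yaml
# Illustrative workflow: install preCICE via the official action and
# build the bindings against it. Action inputs are intentionally
# omitted here, since the real interface lives in the action's README.
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install preCICE
        uses: precice/setup-precice-action@main
      - name: Build and test the bindings
        run: pip install .
```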
This issue morphed into a medusa of a discussion. I'll open separate issues:

- The venv story
Due to changes in https://github.com/precice/python-bindings/blame/develop/.github/workflows/build-docker.yml#L36-L44, the workflow to update the Docker image is triggered on the master branch. This workflow fails on the master branch (https://github.com/precice/python-bindings/actions/runs/7799939617/job/21271746402) because it tries to build the Docker image using the `precice/precice:latest` image, which does not have the user `precice`. Previously this did not happen because the workflow was only run for the develop branch. The question now is: should we change something in the Docker recipe of the bindings or of preCICE to address the error, or shall we just build the Docker image of the bindings always on `precice/precice:develop`?
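One way to avoid hard-coding `latest` versus `develop` in the bindings' recipe would be to make the base image tag a build argument. This is a sketch of that pattern, not the bindings' current Dockerfile; the argument name and default are assumptions:

```dockerfile
# Sketch: parameterize the preCICE base image tag so the same
# Dockerfile can build against develop (CI) or a release tag.
ARG PRECICE_TAG=develop
FROM precice/precice:${PRECICE_TAG}
# ... bindings build steps would follow here ...
```

A CI job could then select the tag at build time, e.g. `docker build --build-arg PRECICE_TAG=latest .`, keeping one recipe for both use cases.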