CI: Use local cache registry #22726
Conversation
Ephemeral COPR build failed. @containers/packit-build please check.
Force-pushed 684512b to c82b31a (Compare)
test/e2e/pull_test.go
Outdated
@@ -38,7 +38,8 @@ var _ = Describe("Podman pull", func() {
 	session := podmanTest.Podman([]string{"pull", "quay.io/libpod/ibetthisdoesntexist:there"})
 	session.WaitWithDefaultTimeout()
-	Expect(session).To(ExitWithError(125, "nitializing source docker://quay.io/libpod/ibetthisdoesntexist:there: reading manifest there in quay.io/libpod/ibetthisdoesntexist: unauthorized: access to the requested resource is not authorized"))
+	// FIXME: uncomfortable hardcoding of localhost:56789
+	Expect(session).To(ExitWithError(125, "nitializing source docker://quay.io/libpod/ibetthisdoesntexist:there: reading manifest there in localhost:56789/libpod/ibetthisdoesntexist: manifest unknown"))
Heads up: tests should still pass locally, and I don't think we want to set up a local registry there, right? So I would think we need a regex or an Or() matcher to match both strings anyway.
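For illustration, a minimal sketch of what such a combined check might look like, assuming Gomega's Or() composes with the suite's ExitWithError matcher; the error substrings are taken from the diff above, and this is not the PR's actual code:

// Sketch only: accept either the real-registry "unauthorized" text or the
// local-cache-registry "manifest unknown" text, so the test passes both in
// CI (local registry) and on a developer laptop (quay.io).
Expect(session).To(Or(
	ExitWithError(125, "unauthorized: access to the requested resource is not authorized"),
	ExitWithError(125, "manifest unknown"),
))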
This is thorny. As currently written, the e2e tests hard-force the use of test/registries.conf, no matter where they're run (CI, laptop, anywhere). This may need to be reevaluated, but I'm not bothering with any of that until I find out whether this approach is viable.
Yes, sure, keep testing; just keep in mind that the end result must still work locally.
contrib/cirrus/runner.sh
Outdated
set -x
# shellcheck disable=SC2154
exec bin/podman run --rm --privileged --net=host --cgroupns=host \
    -v `mktemp -d -p /var/tmp`:/var/tmp:Z \
    --tmpfs /tmp:mode=1777 \
    --expose 56789 \
This does not do anything; I'm not sure if you were expecting an effect from it, or if it serves documentation purposes.
I need the container to be able to talk to host:56789. --port does the opposite, IIRC: host can talk to container. I'll look at logs and see what happens.
The container uses --network=host, so it shares the network namespace with the host; from a network POV there should be no functional difference.
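As a hedged illustration (not from the PR; the registry image, port, and curl-capable image are assumptions): with --network=host the container sees the host's localhost directly, so a host-side registry on 56789 is reachable without --expose or --publish.

# Host side: a throwaway registry listening on 56789 (illustrative port).
podman run -d --name throwaway-reg -p 56789:5000 docker.io/library/registry:2

# Container side: --network=host shares the host's network namespace,
# so localhost:56789 is the host's registry; no --expose/--publish needed.
podman run --rm --network=host registry.fedoraproject.org/fedora-minimal \
    curl -s http://localhost:56789/v2/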
Duh. Thanks, I missed that.
Force-pushed 03825bc to 48e506f (Compare)
Force-pushed 293a60b to 88fa517 (Compare)
Force-pushed 3bc7dbf to 80ed2a7 (Compare)
...to minimize hiccups. RUN-2091 in Jira. Network registries are too unreliable; they cause too many flakes in CI. Here we set up a registry running on each VM, prepopulated with all container images used in podman and buildah tests. Related PRs: containers/podman#22726, containers/buildah#5584. Once those merge, podman and buildah CI tests will fetch images from this local registry.
Signed-off-by: Ed Santiago <[email protected]>
Force-pushed 9c82836 to 74139bd (Compare)
Force-pushed d9379fc to 64007cb (Compare)
Force-pushed 64007cb to 64e176b (Compare)
As of containers/automation_images#357 our CI VMs include a local registry preloaded with all(*) images used in tests.

(*) where "all" means "most".

This commit installs a new registries.conf that redirects docker and quay to the new local registry. The hope is that this will reduce CI flakes.

Since tests change over time, and new tests may require new images, this commit also adds a mechanism for pulling in remote images at test run time. Obviously this negates the purpose of the cache, since it introduces a flake pain point. The idea is: DO NOT DO THIS UNLESS ABSOLUTELY NECESSARY, and then, if we have to do this, hurry up and spin new CI VMs that include the new image(s).

Signed-off-by: Ed Santiago <[email protected]>
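For context, a hedged sketch of what such a registries.conf redirect might look like; the stanzas below are inferred from the localhost:56789 error strings in pull_test above, not copied from the PR:

# Illustrative only: send quay.io and docker.io pulls to the local cache,
# preserving the repo path (quay.io/libpod/foo -> localhost:56789/libpod/foo).
[[registry]]
prefix = "quay.io"
location = "localhost:56789"
insecure = true

[[registry]]
prefix = "docker.io"
location = "localhost:56789"
insecure = true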
This commit gets tests working under the new local-registry system:

* amend a few image names, mostly just sticking to a consistent list of those images in our registry cache. Mostly minor tag updates.

* trickier: pull_test: change some error messages, and remove a test that's now a NOP. Basically, with a local (unprotected) registry we always get "404 manifest unknown"; with a real registry we'll get "403 I can't tell you".

* trickiest: seccomp_test: build our own images at run time, with our desired labels. Until now we've been pulling prebuilt images, but those will not copy to the local cache registry. Something about v1? Anyhow, I gave up trying to cache them, and the workaround is straightforward.

Also took the liberty of strengthening a few error-message checks.

Signed-off-by: Ed Santiago <[email protected]>
New tool, get-local-registry-script, intended for developers to get a local registry running in their environment. This is not necessary for any tests, but may be desirable for performance reasons and/or to recreate the CI environment.

Signed-off-by: Ed Santiago <[email protected]>
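A minimal sketch of the kind of setup such a script automates; the registry image, port, and sample image below are assumptions for illustration, not the script's actual contents:

# Stand up a local registry on the port the tests expect (illustrative).
podman run -d --name local-cache-registry -p 56789:5000 \
    docker.io/library/registry:2

# Prepopulate it with an image the tests use; skopeo preserves the
# repository path, so quay.io/libpod/foo lands at localhost:56789/libpod/foo.
skopeo copy --dest-tls-verify=false \
    docker://quay.io/libpod/testimage:20240123 \
    docker://localhost:56789/libpod/testimage:20240123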
Force-pushed 64e176b to 07b6994 (Compare)
Once again I think this is ready for review. My thanks for everyone's patience.
session = podmanTest.Podman([]string{"info", "--format", "{{.Host.RemoteSocket.Exists}}"})
session.WaitWithDefaultTimeout()
Expect(session).Should(ExitCleanly())
Expect(session.OutputToString()).To(Equal("true"))
This is, I think, the only change since the last review. The latest Debian VM image creates and recreates /run/user/XXX/podman/podman.sock at (seemingly) random times. I've given up trying to understand what is creating it, and honestly I choose to decide that this test was broken. There is no harm in the socket existing in a podman-local test; the only necessary test is: if podman-remote, then the socket must exist.
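A hedged sketch of the remote-only assertion described above, assuming the e2e suite's IsRemote() helper; this is illustrative, not necessarily the PR's exact code:

// Only podman-remote requires the socket to exist; a local-podman run may
// or may not have one (the Debian VM creates it unpredictably), so assert
// nothing in that case.
if podmanTest.IsRemote() {
	Expect(session.OutputToString()).To(Equal("true"))
}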
LGTM
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: edsantiago, Luap99. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/lgtm
/hold cancel (I forgot about that). Thanks.
As of containers/automation_images#357 our CI VMs include a local registry preloaded with all(*) images used in tests.
(*) where "all" means "most".
This PR sets up CI such that it will use that registry.
Signed-off-by: Ed Santiago [email protected]