test flakes tracker #579
There's only one zone we can use, because RHEL needs internal access to install podman. We had to pick an available zone for the non-RHEL tests: https://gitlab.com/fedora/bootc/tests/bootc-workflow-test/-/blob/2bebcdd18f4e0ff9639aff59e2fdfdfcec70f450/playbooks/deploy-aws.yaml#L55 (a zone-selection sketch follows below).
That's something I'd like to discuss with you at the Monday QE sync meeting.
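For illustration only, a minimal sketch of how a deploy step might pick an available zone with the AWS CLI. The variable name and the echo are assumptions for this sketch, not taken from the playbook:

```bash
# Hypothetical zone selection: ask EC2 for the first availability zone
# currently in the "available" state and use it for the deployment.
ZONE=$(aws ec2 describe-availability-zones \
    --filters Name=state,Values=available \
    --query 'AvailabilityZones[0].ZoneName' \
    --output text)
echo "Deploying into zone: ${ZONE}"
```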
OK, got it. Well... per the other discussion, what if we focused only on fedora:40 and centos:stream9 for PR testing by default, and did RHEL integration testing both post merge (I'll get the
I agree.
As you mentioned above, I'd suggest not adding testing in https://gitlab.com/redhat/centos-stream/rpms/bootc/ to avoid blocking releases. From my perspective, all tests should run before release, not at release time.
Recently, say within the last week, this error has been hit more often; the automation has flagged it three times.
In a different run, we somehow ended up with
Which seems related but different from the other one:
Actually, having it be 1M sometimes and 512M other times looks very much like the partitions are getting swapped.
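One way to check (a generic sketch, not from this thread; the device path /dev/vda is an assumption) is to print the partition sizes and labels on both a passing and a failing run and compare the order:

```bash
# Print partition name, size, GPT label, and filesystem type so a
# swapped 1M/512M pair would be visible at a glance.
lsblk -o NAME,SIZE,PARTLABEL,FSTYPE /dev/vda
```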
The test is facing
@henrywang is there anything we can do to fix/improve this?
Having CI that is basically permanently red adds mental overhead: each time, someone has to check which specific jobs are failing.
Yes, we have issue https://issues.redhat.com/browse/TFT-2691 to track this.
I didn't stress test this much, but I think #698 is going to help. At the very least, if we are still racing somehow, we'll get a clearer error message.
I think that fixed the install flake; I haven't seen it since.
This happened again recently. Failed log example:
I noted this one over in the ostree-rs-ext tracker; it's likely related to the other similar issues around broken pipes.
This issue seems to exist only on bare metal machines (the Testing Farm public ranch runs the virtualization tests on AWS bare metal instances). I can't reproduce it in a nested virtualization environment running the same test script.
Hmm, could be. Any idea what kind of storage is used on the bare metal instances? I'm thinking of trying to reproduce this in a virtualized environment by attaching the disk via nbd and then using the
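For reference, a minimal sketch of attaching a disk image over NBD in a virtualized environment (the image name disk.qcow2 and device /dev/nbd0 are assumptions):

```bash
# Load the nbd kernel module, then export the qcow2 image as a block
# device so tests can run against it as if it were a local disk.
sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 disk.qcow2
lsblk /dev/nbd0
# ... run the test against /dev/nbd0 ...
sudo qemu-nbd --disconnect /dev/nbd0
```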
Hi @jeckersb, do you know of any workaround for this issue?
@henrywang isn't that #509 (comment)? Is the input image zstd:chunked? Is it a RHEL 10 system?
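One way to check whether an image's layers are zstd-compressed (a sketch assuming skopeo and jq are available; the image reference is a placeholder):

```bash
# zstd / zstd:chunked layers report a media type ending in "+zstd"
# instead of the usual "+gzip".
skopeo inspect --raw docker://quay.io/example/image:latest \
    | jq -r '.layers[].mediaType'
```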
It's a C10S system. Yeah, same thing as RHEL 10. The following workaround might work? Thanks.

```bash
if [[ "${REDHAT_VERSION_ID%%.*}" == "10" ]]; then
    sed -i 's/^compression_format = .*/compression_format = "gzip"/' /usr/share/containers/containers.conf
fi
```
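After applying the override, a quick way to confirm it took effect (my own suggestion, not from the thread):

```bash
grep '^compression_format' /usr/share/containers/containers.conf
```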
Yep, per #509 (comment), that's what the new default will be, hopefully soon.
I think we're good on this!
Parsing layer blob: Broken pipe
This one is like my nemesis! I have a tracker for it over at coreos/rpm-ostree#4567 too.