nftables: internal:0:0-0: Error: Could not process rule: Device or resource busy #23404
(If this is a netavark bug, could you please copy it there instead of moving? My flake log needs a podman issue number. Thanks.)
This sounds like the error we are seeing in https://bugzilla.redhat.com/show_bug.cgi?id=2013173, but I haven't yet looked into whether this is something netavark causes or whether there is some other cause. cc @mheon
So if I read https://wiki.nftables.org/wiki-nftables/index.php/Configuring_chains correctly, the EBUSY error just means the chain is not empty when we try to remove it.
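A minimal sketch of that behavior with the nft CLI (the table and chain names here are throwaway examples, not the ones netavark uses):

```sh
# create a throwaway table and chain, then add a single rule
nft add table inet test
nft add chain inet test demo
nft add rule inet test demo counter

# deleting the chain while it still holds a rule fails with EBUSY,
# which nft reports as "Could not process rule: Device or resource busy"
nft delete chain inet test demo

# flushing the rules first makes the delete succeed
nft flush chain inet test demo
nft delete chain inet test demo

# clean up
nft delete table inet test
```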
There is also the potential of a race against something else adding rules, though that something can't be Netavark because of locking.
Well, it should be safe to assume that on a CI VM nothing besides netavark would mess with our nftables chains... So if locking works, then we have a bug somewhere that leaves rules behind instead of deleting them properly.
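If rules really are being left behind, one quick check on an affected VM would be to dump whatever is still sitting in netavark's table after the failure. This is a sketch; the table name `inet netavark` is my assumption about what the nftables driver uses:

```sh
# show everything still present in the table netavark manages
# (assuming the driver's table is "inet netavark"; adjust if not)
nft list table inet netavark

# or search the full ruleset for leftover netavark chains/rules
nft list ruleset | grep -i netavark
```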
Might be related to this: containers/netavark#1068
I can stably reproduce the issue on my system (Fedora) by doing the following: …
I don't think it is related to this issue. This one is about a specific CI flake, and the error is EBUSY, not ENOENT as in your case. If it only triggers with the specific port-forwarding setup from containers/netavark#1068, then that is most likely the cause of your problem. I'll take a look next week.
I am going to close this one as I don't think we have seen it since then; we can reopen if we see it again.
I am running into this issue on a fresh Fedora 41 installation (on WSL).
Any ideas what to look for? It's not a race condition in my case; I can't run any container using podman.
Please file a new issue on netavark, and try running the command with …
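The exact flag suggested in that comment was lost when this thread was exported. As a hedged starting point, podman's `--log-level=debug` shows the netavark invocation and the full nftables error, which is usually enough detail to attach to a netavark issue (the container image and command below are just an example):

```sh
# run any container with debug logging to capture the failing
# netavark / nftables call
podman --log-level=debug run --rm alpine true

# include the current ruleset in the report as well
sudo nft list ruleset
```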
Very weird one-off:
Seen twice in one f40 root run.
What's weird about it:
It is possible that this has been happening all along, but ginkgo-retry has been hiding it. We have no sane way to find out, aside from downloading and grepping all logs for all CI runs. Or, as I will suggest in a future Cabal, disabling flake retries.