bridge: support DHCP ipam driver #869
base: main
Conversation
Thanks for the contribution. The reason we do not support it right now is that we do not really support a layer 2 bridge setup. Our current code always assumes that we created the bridge, and we will also remove it once all interfaces connected to the bridge are removed.
I assume in order to use it you have your external interface connected to the bridge?
Also, this assumption means we always create firewall rules to NAT the traffic, which usually does not make sense for these setups, as you want to talk directly to your gateway without NAT involved. So I think we would need to add proper layer 2 bridge support first before enabling DHCP for bridge networks.
As for these changes, the same code should not be duplicated for bridge; I suggest splitting the common parts out into helper functions.
Hi! Thanks for looking.
Yes, I already have an existing bridge on the host, which is connected to external physical interfaces as well as VMs running on the host. With this patch, I'm able to treat containers the same as VMs, where they appear on the LAN as if they were their own machine.
I don't follow -- there are currently no NAT rules being created as far as I see. My containers are directly accessing the LAN with their assigned DHCP addresses:
Host is 10.0.1.1, another physical machine on the LAN is 10.0.1.3, my container is getting DHCP address 10.0.103.189:
So, this appears to be working exactly as I'd hope. The Netavark debug logs are:
I'm new to Rust and this code, and would prefer if someone more familiar with this could "do it right" :)
Ah, right. I think this is more of a coincidence though. Because of the dhcp driver we have no subnets in the ipam struct, so the firewall code sees no subnets and does not add any firewall rules (this is expected, as we also have the ipam driver none where this is wanted). But given this, I think that would make it acceptable to me; it clearly solves your use case correctly.
I'll approve as we wait for the comments to be addressed ... /approve
Force-pushed 7e0eed9 to c52a907
Addressed the review comments
Just in case it helps, I'm trying to get the exact same use case working and this would solve it 👍
Thanks. Have you been able to test this? It's been working fine for me and should hopefully be ready to get merged.
I think functionally this is fine; however, I still have concerns that this is working based on an implementation detail rather than well-defined behaviour.
I think we should expose this as some specific layer 2 setup instead; the reason is our firewall setup and bridge removal. The firewall setup works today with iptables because it doesn't do anything if it finds no subnet. However, we also have nftables and firewalld drivers that might do something in this case.
@mheon WDYT?
All of our firewall code should be IP based (I don't think we can reasonably do NAT or other firewall operations without a source subnet). So I think it is a reasonable assumption that any firewall driver we include should only function in the case that a source subnet (v4 or v6, doesn't really matter, as long as there is at least 1) is set on the bridge. Anything lower-level than that isn't really a firewall anymore, but would arguably be an implementation detail of the lower-level bridge driver.
Yes, the code is IP based, but it will still start to create tables/chains that are then left empty due to having no IPs. I'd rather have well-defined behavior where we do not call into the firewall code at all in this case, so we know we won't mess anything up.
Yeah, that's reasonable. There's no reason for the firewall code to be called at all if the interface we're configuring doesn't have an IP.
(Well, a Netavark-managed IP at least)
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: AkiraMoon305, baude, jimparis. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
Support DHCP for bridge driver, like macvlan. Signed-off-by: Jim Paris <[email protected]>
Rebased and re-tested on main. Where do we stand on this PR? It's still a feature I use and need for my setup.
I created #1090 to address the underlying concerns and preferred approach, since those aren't DHCP-specific.
Hello, just throwing my hat in the ring for this being a desirable feature. This is pretty critical for achieving my desired setup of hosting podman containers alongside nspawn containers and VMs, reverse proxying them all at the host level while also maintaining direct connectivity inside the LAN. I see that #1090 is now merged, and this PR hasn't seen activity for a couple of months. What's the status? I would offer to contribute myself, but it looks like things are already basically at the finish line.
Yes, this should be good to rebase, and then add a check that DHCP is only supported with the unmanaged mode.
Fixes #868