
[RFE] Support bridges created outside of netavark #1090

Closed · mwinters0 opened this issue Oct 2, 2024 · 11 comments · Fixed by #1126

mwinters0 commented Oct 2, 2024
Some users want to connect containers to a bridge that they have created outside of podman/netavark, e.g. so that VMs and containers can talk to each other. According to the comments in #868, we do not currently support this configuration:

Our current code always assumes that we created the bridge and we will also remove it once all interfaces connected to the bridge are removed.

The patch in that PR does work, but only coincidentally due to implementation details, and it leaves cruft behind, such as tables/chains that sit empty because no IPs are assigned. We should instead explicitly support this configuration and not call the firewall code at all if the interface has no netavark-assigned IP.

RushingAlien commented Oct 2, 2024

I'd like to add that my use case is a bridge-to-LAN setup, where the bridge acts as a plain L2 segment so that VMs and containers can reach the campus LAN directly instead of going through another L3 virtual network.

RushingAlien commented Oct 2, 2024

I'm thinking of adding a boolean property: managed

true is the default and means netavark will manage the bridge.
false means netavark will not manage the bridge, for when the bridge is set up externally.

On podman network create, if the interface given by interface-name already exists, managed would automatically become false.
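As a rough sketch of what this proposal would look like from the CLI (interface and network names are placeholders, and the implicit managed=false behavior described here is only proposed, not implemented):

```shell
# Create the bridge outside of podman/netavark, e.g. with iproute2:
ip link add br0 type bridge
ip link set br0 up

# Under this proposal, creating a network whose interface-name already
# exists on the host would implicitly flip managed to false:
podman network create --driver bridge --interface-name br0 sharedbr
```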

mwinters0 (Author) commented Oct 2, 2024

@RushingAlien FYI, your use case can be handled by using the macvlan driver in bridge mode, or the ipvlan driver (depending on the networking hardware you're connecting to and the policy set there). This issue is for people who specifically want to use the bridge driver to accomplish the same and/or other use cases. (Maybe that's you - I'm not sure. Just wanted to make you and anyone else who drives by aware.)

Have a look at podman-network-create(1) and/or this docker macvlan tutorial.
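For anyone landing here with the same LAN-access use case, a macvlan setup along these lines may do it (parent interface, subnet, and network names are placeholders for your environment):

```shell
# Static IPAM: containers get addresses from the given subnet
podman network create -d macvlan \
  -o parent=eth0 \
  --subnet 10.0.0.0/24 --gateway 10.0.0.1 \
  lan

# Or let the LAN's own DHCP server hand out leases
# (requires the netavark dhcp-proxy service to be running)
podman network create -d macvlan -o parent=eth0 --ipam-driver dhcp lan-dhcp
```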

@RushingAlien

It worked, thanks :D.

On another note, will rootless containers be able to use the existing bridge? If so, can DHCP work for rootless containers connected to the bridge?

Luap99 (Member) commented Oct 4, 2024

No, rootless networking cannot use any host interfaces, be it macvlan or bridge; we simply have no privileges there, so the DHCP proxy will never work either. Rootless networking always runs in an isolated namespace; see containers/podman#22943 (comment) for the architecture.

M1cha (Contributor) commented Nov 12, 2024

@Luap99 What do you think about unmanaged bridge support? I also need it, because macvlans can't communicate with the host (which in my case is a router, so they have to; and forwarding is not an option because I want NAT-free IPv6).

I can implement this once we've agreed on the details. What's not yet clear to me is what we want to do with the firewall rules and all the sysctl settings that are currently changed (I don't fully understand why that's done). If we disable too much, one could ask why we even want to use the bridge driver instead of a new one (a plugin?).

Luap99 (Member) commented Nov 12, 2024

I was thinking of a network option like mode that could be set to l2 (layer 2) or l3 (layer 3). In the l2 code path we simply skip the firewall code and skip the remove-bridge step on last container shutdown (well, technically once there are no other interfaces attached to the bridge). For the sysctls I would need to take a closer look; we would still want to turn on routing, I guess, but maybe we should disable the IPv6 DAD and SLAAC sysctls.

In this case all firewall rules would need to be managed by the user, which I think is the normally expected use case if you share a bridge between VMs and containers.

Luap99 (Member) commented Nov 12, 2024

cc @mheon in case you have opinions here

mheon (Member) commented Nov 12, 2024

My only question is user experience. How do we expose this in a way that makes sense and doesn't introduce a lot of complexity?

Luap99 (Member) commented Nov 12, 2024

podman network create --opt mode=l2, if we just parse it as a normal option, like mtu for example.
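Putting it together, the UX under discussion would look roughly like this (bridge and network names are placeholders; note that the later commit messages on the fixing PR switched from l2/l3 to managed/unmanaged naming):

```shell
# The admin creates and owns the bridge:
ip link add br0 type bridge
ip link set br0 up

# netavark then attaches containers to it without touching firewall
# rules or deleting the bridge on shutdown:
podman network create --driver bridge --interface-name br0 \
  --opt mode=l2 sharedbr
```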

mheon (Member) commented Nov 12, 2024

Fair enough... I'm not crazy about the idea, but code-wise it shouldn't be much of a burden to maintain.

M1cha added a commit to M1cha/netavark that referenced this issue Nov 13, 2024

While Linux doesn't support modes on bridges, we use this concept to let the user tell us whether they want podman/netavark to own the bridge or not. L3 behaves the same way as before this commit. L2 requires the bridge to exist already, will not set up any sysctls or firewall rules on the host, and will not delete the bridge once all containers have left.

Fixes containers#1090

Signed-off-by: Michael Zimmermann <[email protected]>
M1cha added a commit to M1cha/netavark that referenced this issue Nov 19, 2024

While Linux doesn't support modes on bridges, we use this concept to let the user tell us whether they want podman/netavark to own the bridge or not. Managed behaves the same way as before this commit. Unmanaged requires the bridge to exist already, will not set up any sysctls or firewall rules on the host, and will not delete the bridge once all containers have left.

Fixes containers#1090

Signed-off-by: Michael Zimmermann <[email protected]>
5 participants