NP-604: Add loadbalancer service controller #1241
Conversation
@zshi-redhat @fzdarsky PTAL. |
@pliurh I know this is the first step in adding a generic load balancer for microshift. Do you envision how this approach could be used for other CNI implementations, for example the bridge CNI? We will make the CNI pluggable in 4.13, which allows the use of other CNIs with upstream microshift. I wonder how this loadbalancer controller can be used by other CNIs: does it need to know the CNI type?
We can take a similar approach to what K3s does. In my opinion, there are two things we need to do to support the LB service in microshift: 1) update the status.loadBalancer field of the service, as we do in this patch; 2) do the plumbing job to steer the ingress traffic to the pods. For OVN-K, we only need 1), whereas for other CNIs we may need to do both. In K3s's implementation, 2) is done by a daemonset that inserts iptables rules on the host.
Yes, I suppose we will add a new field to the microshift config to specify the CNI type. Then we can add a check in this controller so that it knows what it needs to do for each CNI. |
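To make step 1) concrete, here is a minimal sketch of what updating the status.loadBalancer field can look like with client-go. This is not the PR's actual controller code; the helper name setLoadBalancerIngress and its parameters are illustrative assumptions.

```go
package lbcontroller

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setLoadBalancerIngress stamps the node IP into a LoadBalancer Service's
// status, i.e. step 1) described above. Hypothetical helper for illustration.
func setLoadBalancerIngress(ctx context.Context, client kubernetes.Interface, ns, name, nodeIP string) error {
	svc, err := client.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if svc.Spec.Type != corev1.ServiceTypeLoadBalancer {
		return nil // nothing to do for ClusterIP/NodePort services
	}
	svc.Status.LoadBalancer.Ingress = []corev1.LoadBalancerIngress{{IP: nodeIP}}
	// UpdateStatus writes only the status subresource; the spec is untouched.
	_, err = client.CoreV1().Services(ns).UpdateStatus(ctx, svc, metav1.UpdateOptions{})
	return err
}
```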
Note our goal in this release cycle is to remove all CNI-specific logic from the MicroShift binary. That is, we cannot add this controller to MicroShift unless it works independently of OVNk / the CNI. If it uses OVNk, we'll need to bundle it with the OVNk CNI RPM package (or think of a separate load balancer plug-in package). |
Where does the plumbing for this happen in OVN-K? My understanding is that for OVN-K on regular OpenShift, it was part of the cloud provider redirecting the LB port to the NodePort on the nodes. But we still need an LB-port -> NodePort mapping on the host. We should make this happen in general, not only for OVN-K. |
I like the general direction the discussion on this review is going. Would it be useful to have a synchronous meeting to review the requirements and how other plans might intersect with this work? Would it help to write up an enhancement to record the decision making process for future maintainers? |
In OVN-K, the plumbing part has already been implemented. It watches for changes in the status of the LB service, then adds the iptables rules: https://github.com/openshift/ovn-kubernetes/blob/master/docs/external-ip-and-loadbalancer-ingress.md. |
The logic of the controller is common and is required by all CNIs to support the LB service. It works with OVN-K out of the box, as OVN-K already has the plumbing function. For other CNIs, we may need to implement the plumbing logic ourselves; it could be adding iptables rules or any other CNI-specific mechanism. |
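For CNIs without built-in plumbing, that iptables approach could look roughly like the sketch below, which DNATs traffic arriving at the node IP and service port to the service's ClusterIP. The rule shape, chain, and helper name are assumptions for illustration, not the actual rules K3s or OVN-K installs.

```go
package lbcontroller

import (
	"fmt"
	"os/exec"
)

// addIngressDNATRule is a hypothetical example of the plumbing step: redirect
// traffic hitting nodeIP:port to the service's clusterIP:port. A real
// implementation would also handle UDP, rule cleanup, and idempotency.
func addIngressDNATRule(nodeIP string, port int32, clusterIP string) error {
	out, err := exec.Command("iptables",
		"-t", "nat", "-A", "PREROUTING",
		"-p", "tcp", "-d", nodeIP,
		"--dport", fmt.Sprint(port),
		"-j", "DNAT",
		"--to-destination", fmt.Sprintf("%s:%d", clusterIP, port),
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("iptables failed: %v: %s", err, out)
	}
	return nil
}
```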
I created an enhancement PR at openshift/enhancements#1316. Maybe we can continue our discussion there. |
/retest |
/hold |
I'd like to understand a little bit more how that works:
- Admin deploys an app that includes a Service with type: LoadBalancer.
- From the K8s perspective, LoadBalancer builds on top of NodePort, and NodePort builds on top of ClusterIP: requesting an LB causes a NodePort to be set up.
- In a cloud environment, the external LB needs information about where to direct the traffic, which would be the "imaginary NodePort that is part of the LoadBalancer svc", right?
Looking at the code, it looks like we're leveraging the fact that a NodePort gets created and just adding the NodeIP (from the ushift config) to the Service's status.loadBalancer.ingress.IP?
Why would this not work with other CNIs? They handle NodePorts, and we just fill in the ingress IP.
Okay, so I see that besides having a NodePort (30xxx port), it also exposes the port as NodeIP:port (e.g. 8080).
And this is the part that OVN-K handles by itself, but for other CNIs we would need something like service/klipper-lb in K3s to set up 192.168.122.X:8080 -> 10.42.0.Y:8080 (Pod), right?
Okay, so I see that besides having a NodePort (30xxx port), it also exposes the port as NodeIP:port (e.g. 8080). |
Correct. |
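Tying the two halves of that exchange together, a controller along these lines could be informer-based: watch Services, publish the node IP, and (for CNIs without built-in handling) add the plumbing. The following is a hedged sketch under the same assumptions as the snippets above; watchServices, setLoadBalancerIngress, and addIngressDNATRule are hypothetical names, not the PR's actual functions.

```go
package lbcontroller

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchServices reacts to Service adds and stamps the node IP into every
// LoadBalancer Service that has no ingress IP yet. Illustrative only; a real
// controller would also handle updates, deletes, and retries via a workqueue.
func watchServices(ctx context.Context, client kubernetes.Interface, nodeIP string) {
	factory := informers.NewSharedInformerFactory(client, 0)
	informer := factory.Core().V1().Services().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			svc := obj.(*corev1.Service)
			if svc.Spec.Type == corev1.ServiceTypeLoadBalancer &&
				len(svc.Status.LoadBalancer.Ingress) == 0 {
				// Step 1): publish the node IP (hypothetical helper from above).
				_ = setLoadBalancerIngress(ctx, client, svc.Namespace, svc.Name, nodeIP)
				// Step 2), for non-OVN-K CNIs: plumbing, e.g. addIngressDNATRule above.
			}
		},
	})
	factory.Start(ctx.Done())
	factory.WaitForCacheSync(ctx.Done())
	<-ctx.Done()
}
```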
/test e2e-router-smoke-test |
With this new controller and OVN-K, microshift is able to support k8s services of loadbalancer type. The controller is responsible for updating status.loadBalancer with the node IP and hostname of the local host. OVN-K is responsible for adding the iptables rules accordingly. Signed-off-by: Peng Liu <[email protected]>
/unhold |
/approve |
/lgtm |
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: pliurh, pmtk, zshi-redhat The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment. |
I approved the enhancement, so we can merge this. /hold cancel |
@pliurh: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. |