ServiceLB not detecting removed loadbalancers #1870
Comments
Are you sure your change is valid? There are a number of gotchas when changing service types, as some keys are or are not allowed to be set in combination with particular service types, and this will cause the update to be rejected. Can you confirm that the cluster has actually accepted the service spec update?
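As an aside, one quick way to confirm whether the API server accepted the edit (a sketch assuming the helloworld service from the manifest in the next comment, saved as a hypothetical helloworld.yaml):

```sh
# Read the service type back from the API server; if the edit was rejected,
# the type will still be LoadBalancer.
kubectl get svc helloworld -o jsonpath='{.spec.type}{"\n"}'

# kubectl also prints the rejection reason if the update is refused:
kubectl apply -f helloworld.yaml
```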
In this particular case I can't remember exactly what I changed. I don't know if that particular situation triggered this bug, but it was produced by editing the service type.
I was deploying the following YAML for simple testing, and then editing the service type to ClusterIP:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: helloworld
spec:
  type: LoadBalancer
  selector:
    app: helloworld
  ports:
    - name: http
      protocol: TCP
      port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: helloworld
          image: gcr.io/google-samples/node-hello:1.0
          ports:
            - containerPort: 8080
              protocol: TCP
```
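For reference, a non-interactive way to make the same type change (a sketch; note that depending on the Kubernetes version the allocated nodePort may also have to be dropped in the same update, which is one of the validation gotchas mentioned above):

```sh
# Switch the service from LoadBalancer to ClusterIP. With a JSON merge
# patch the ports array is replaced wholesale, so restating the port
# without a nodePort clears the allocated one (ClusterIP services may
# not carry nodePorts).
kubectl patch svc helloworld --type=merge -p '
{"spec":{"type":"ClusterIP",
         "ports":[{"name":"http","protocol":"TCP","port":8080}]}}'
```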
Ah OK - so we're not picking up on the change from LoadBalancer to ClusterIP. The daemonset should probably be torn down immediately when that change occurs. It's not cleaned up at delete because it's no longer of type LoadBalancer, and we didn't expect it to have a daemonset.
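The leftover is easy to observe (a sketch assuming this k3s version's naming, where ServiceLB creates a daemonset named after the service with an svclb- prefix in kube-system):

```sh
# After switching the service to ClusterIP, the svclb daemonset should be
# gone; per this issue it is still listed:
kubectl -n kube-system get daemonset svclb-helloworld
```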
@k3s-io and @bradtopol This is really not acceptable behavior for a production product, and it is STILL an issue after more than 1.5 years.
Validated using v1.23.5-rc1+k3s1. Performed the same steps as mentioned above, and this is now working.
(Re-issuing of k3s-io/klipper-lb#3, because I've possibly posted that in the wrong spot after seeing what ServiceLB does.)

Version:
k3s version v1.17.5+k3s1 (58ebdb2a)

K3s arguments:
k3os installation, default arguments, no tampering
k3os version v0.10.1

Describe the bug
Editing a Service resource to no longer have LoadBalancer as its type, or removing it, does not get detected, and the daemonset pods + iptables rules will not be removed. When a ServiceLB daemonset gets removed, the corresponding iptables rules aren't flushed/removed.

To Reproduce
1: Install a clean k3os instance.
2: Add a pod resource that opens a port.
3: Add a LoadBalancer-typed Service resource.
4: Confirm that the daemonset pod gets created.
5: Remove the pod and service.
6: Observe the stale daemonset and iptables rules (see the scripted version after this list).
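A scripted sketch of the steps above (assumes the manifest from the earlier comment is saved as a hypothetical helloworld.yaml, and the svclb- daemonset naming of this k3s version):

```sh
kubectl apply -f helloworld.yaml                       # steps 2-3
kubectl -n kube-system get daemonset svclb-helloworld  # step 4: created
kubectl delete -f helloworld.yaml                      # step 5
kubectl -n kube-system get daemonset svclb-helloworld  # step 6: still there
sudo iptables-save -t nat | grep 8080                  # step 6: stale rules
```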
Expected behavior
For the daemonset to be removed and the corresponding iptables rules to be cleared.
Actual behavior
The daemonset stays up despite the missing container, and the iptables rules are stale.
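Until the controller handles this itself, a manual cleanup sketch (daemonset name assumed per the svclb- convention; note that, per this report, iptables entries may still linger even after the daemonset is deleted):

```sh
# Remove the stale daemonset by hand ...
kubectl -n kube-system delete daemonset svclb-helloworld
# ... then inspect the NAT table for leftover rules referencing the
# service port:
sudo iptables-save -t nat | grep 8080
```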