I just had this issue, where I had to restart my node twice and flush iptables both times:
The problem
I kept connecting to an internal pod's ssh port, because a LoadBalancer service had hooked the host's ssh port up to that pod.
No problem, I just set that service's type to ClusterIP or NodePort.
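For concreteness, that change is a one-liner; a minimal sketch, assuming the service is called ssh-proxy in the default namespace (both placeholders):

    # ssh-proxy and default are placeholders for the actual service name/namespace
    kubectl patch svc ssh-proxy -n default -p '{"spec": {"type": "ClusterIP"}}'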
I still kept connecting to the pod's sshd service.
I dug into iptables, running iptables-save and grepping through the dump, and saw that it still contained the rules needed to route the "hostport" to that service.
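Roughly what that digging looked like, assuming the service exposes port 22 and reusing the placeholder name from above:

    # dump the whole ruleset to a file and search it
    iptables-save > /tmp/rules.txt
    grep -n -- '--dport 22' /tmp/rules.txt        # NAT/hostport rules still matching the ssh port
    grep -n 'default/ssh-proxy' /tmp/rules.txt    # KUBE-* rules carry namespace/service in their comments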
Fine, I'll take note of that, run iptables -F, and then immediately restart.
After restarting, I could briefly connect to the "right" sshd, but after disconnecting and reconnecting I was greeted with a "wrong" host key warning, signaling that the node had somehow hijacked my ssh port again.
At this stage I looked at my pods and found that the "sidecar" pod for the once-LoadBalancer service still existed. I deleted it, but because it was owned by a daemonset, I had to delete that too.
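For anyone retracing that step: the pod itself tells you who owns it, and deleting the daemonset is what makes it stay gone. The pod and daemonset names below are placeholders (k3s usually names them svclb-<service>), and the namespace may differ by k3s version:

    # the owning controller shows up under "Controlled By"
    kubectl describe pod svclb-ssh-proxy-abcde -n kube-system | grep 'Controlled By'
    # deleting just the pod lets the daemonset recreate it, so delete the daemonset itself
    kubectl delete daemonset svclb-ssh-proxy -n kube-system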
I tested the port again; it was still in the iptables rules, so I flushed and restarted once more.
This time the port finally connected properly, even after the pods came back up.
Observation
klipper-lb creates "sidecar" daemonset pods for every LoadBalancer service it observes, but it does not remove them once they are no longer required, nor does klipper clean up the leftover iptables rules.
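A quick way to cross-check this on a cluster is to compare the services that are still of type LoadBalancer against the svclb daemonsets klipper-lb has created (svclb- is k3s's usual prefix; adjust if yours differs):

    # services that should still have a klipper-lb daemonset
    kubectl get svc --all-namespaces | grep LoadBalancer
    # daemonsets klipper-lb has created; anything without a matching service is a leftover
    kubectl get daemonsets --all-namespaces | grep svclb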
(I use k3os with klipper-lb installed by default)