-
Hi all. Recently we noticed something, and I would like to check whether it is expected behavior or whether we are missing some configuration. When CoreDNS in the cluster fails, outgoing pod connections start looking up 10.43.x.x addresses outside Kubernetes, on the main LAN interface of the Linux nodes. In one case this led to network saturation outside of Kubernetes and a crash of the network. We have since seen that the same thing can happen when, say, a pod running a DB fails: the pod calling the DB service starts looking outside of the Kubernetes network, even though the services it is looking for are on 10.43.x.x addresses.
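For reference, this is roughly how one can confirm that pods resolve DNS through a 10.43.x.x service address (the pod name below is just a placeholder; in a default RKE2 install the cluster DNS service typically sits at 10.43.0.10):

```shell
# Inspect the resolver configuration inside a running pod.
# "my-app-pod" is a placeholder name, not a pod from our cluster.
kubectl exec my-app-pod -- cat /etc/resolv.conf
# With the default ClusterFirst DNS policy this should show a line like:
#   nameserver 10.43.0.10
```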
Replies: 3 comments 8 replies
-
Can you explain more about what specifically you're seeing? Are you just seeing DNS requests for cluster addresses hitting external DNS servers, or are you actually somehow seeing un-NATed traffic to pod or service addresses hitting the rest of your network?
-
@brandond For testing I have one server that is both master and worker node for RKE2. With tcpdump we monitor all traffic going from the VM's main interface to 10.43.0.0/16. Say I kill rke2-coredns-rke2-coredns; tcpdump then starts logging the following traffic on the main interface: When CoreDNS recovers, this DNS "spilling" stops. Furthermore, consider a container A and a container B. On one of our on-prem test systems, QA left some pods in a failed state over the holidays.
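The capture was done along these lines (the interface name is environment-specific):

```shell
# Capture any traffic leaving the node's main interface towards the
# cluster service CIDR. Normally nothing should show up here, since
# traffic to service addresses is supposed to be DNAT-ed on the node.
# "eth0" is a placeholder for the node's LAN interface.
tcpdump -ni eth0 dst net 10.43.0.0/16
```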
-
No custom iptables rules. We are running vanilla RHEL 8.6 with the firewalld service disabled. We have both iptables and firewalld installed, with firewalld disabled as a service: Output of iptables rules: List of all packages:
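The checks above amount to roughly the following (commands shown for completeness; actual output omitted here):

```shell
# Confirm firewalld is disabled and not running (RHEL 8.6).
systemctl is-enabled firewalld   # expected: disabled
systemctl is-active firewalld    # expected: inactive

# Dump the current iptables rules (RKE2 / kube-proxy program these).
iptables -S

# List the installed packet-filtering packages.
rpm -qa | grep -Ei 'iptables|firewalld|nftables'
```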
That is a really old and end-of-life version of RKE2. Can you reproduce this on any more recent release, e.g. 1.26+?