Issue with DNS Resolution in Airgapped K3s Cluster Due to UDP Block on Port 53 #11270
We should cover this in the docs, but yes, you can customize the CoreDNS settings via ConfigMap. You can see where in the Corefile the various entries are imported: Lines 56 to 77 in 9c32f83
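As a sketch of what that ConfigMap looks like: k3s's packaged Corefile imports `/etc/coredns/custom/*.override` into the main `.:53` server block and `/etc/coredns/custom/*.server` as additional server blocks, both sourced from a ConfigMap named `coredns-custom` in `kube-system`. The zone and upstream IP below are hypothetical placeholders.

```yaml
# Sketch: extending k3s's CoreDNS without editing the shipped manifest.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  # Keys ending in .override are merged into the default .:53 server block.
  log.override: |
    log
  # Keys ending in .server become separate server blocks.
  # example.org and 10.0.0.53 are hypothetical.
  example.server: |
    example.org:53 {
      forward . 10.0.0.53
    }
```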
@brandond thanks for the quick update. The current resolv.conf file contains the following configuration:
I would like to add custom options to this configuration. For example:
How can I add additional options to resolv.conf?
That's not the correct thing to do. You don't need to change the resolver options in pods, since pods only talk to CoreDNS. You need to change the CoreDNS settings, as CoreDNS is what actually queries external resolvers. You need to configure CoreDNS to use TCP instead of UDP.
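Concretely, that means setting the `force_tcp` option on CoreDNS's `forward` plugin (a documented CoreDNS option). A minimal sketch of a Corefile server block, abbreviated from the default k3s configuration:

```
# Sketch: force CoreDNS to use TCP when querying upstream resolvers.
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf {
        force_tcp    # always use TCP to reach upstream resolvers
    }
    cache 30
}
```

This changes only CoreDNS's upstream traffic; pods still reach CoreDNS itself over whatever protocol their resolver chooses.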
The issue is that traffic on UDP port 53 to any IP other than localhost is blocked in our current environment. As a result, we are unable to reach the kube-dns service on UDP port 53. To work around this, we intend to configure all pods to use TCP instead of UDP for DNS queries.
You can't even use UDP port 53 within the cluster? That's not really going to work; you shouldn't be filtering traffic to the cluster pod or service CIDR ranges at that level. I could see having it blocked outbound to the internet, but if something is filtering traffic within the cluster, you're going to have problems.
You can try setting resolver options in the pod DNS config, but you'll need to do this for every pod in your cluster. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config
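A hedged sketch of that per-pod setting: `use-vc` is a glibc resolv.conf option that forces the resolver to use TCP. Note it only affects glibc-based lookups; musl-based images and tools that implement their own resolver (including busybox `nslookup`) ignore it. The pod name and image below are placeholders.

```yaml
# Sketch: writing "options use-vc" into one pod's resolv.conf
# via spec.dnsConfig, forcing glibc DNS lookups over TCP.
apiVersion: v1
kind: Pod
metadata:
  name: dns-tcp-example   # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # hypothetical image
  dnsConfig:
    options:
      - name: use-vc      # glibc: use TCP ("virtual circuit") for DNS
```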
Yes, that's correct. We are not allowed to connect to any IP addresses, including internal IPs, except for 127.0.0.1, which has a downstream path to the enterprise DNS server. Therefore, we're trying this workaround. Do you see any other options to make this work in such a restricted environment?
Yes, I tried this at the pod level, but I'm looking for a solution that applies cluster-wide. |
There is no way to alter default DNS options at the cluster level. You could take a look at using something like this to automatically modify pod specs at creation time: https://github.com/redhat-cop/podpreset-webhook Whatever policy you're working with here doesn't really make any sense for use with Kubernetes or containerized workloads in general though. You need to ensure that you don't have anything blocking traffic within the cluster or other things are going to break as well. |
I completely agree with you. Given the current limitations, we’re exploring a workaround. Eventually, we plan to move to a hosted Kubernetes solution where we won’t face these kinds of restrictions. Manually editing the resolv.conf inside the container works, so now we're focused on finding a way to automate this configuration consistently across all pods. |
@brandond can we use --resolv-conf or the $K3S_RESOLV_CONF environment variable to achieve this?
I don't believe so; the options from the host's resolv.conf don't have any impact on the resolv.conf in the pods. You need to use spec.dnsConfig for that.
Environmental Info:
K3s Version: k3s version v1.31.1+k3s1 (452dbbc)
go version go1.22.6
Node(s) CPU architecture, OS, and Version:
x86_64, Red Hat Enterprise Linux 8.6 (Ootpa)
Cluster Configuration:
1 master and 1 agent
Describe the bug:
Issue with DNS Resolution in Airgapped K3s Cluster Due to UDP Block on Port 53
Steps To Reproduce:
We have set up an airgapped K3s multi-node cluster, and due to network restrictions, traffic on UDP port 53 is blocked, preventing CoreDNS from resolving hostnames. However, TCP traffic is allowed.
Is there a way to configure or force CoreDNS to use TCP instead of UDP for DNS queries? Additionally, can this configuration be applied to all pods to ensure they query DNS using TCP?
Expected behavior:
We modified the kube-dns entrypoint and Service to use TCP; pods should then be able to resolve service names over TCP instead of UDP.
Actual behavior:
Despite these changes, ping and nslookup commands continue to use UDP for DNS queries.
Additional context / logs: