
Issue with DNS Resolution in Airgapped K3s Cluster Due to UDP Block on Port 53 #11270

Closed
mahesh-kore opened this issue Nov 8, 2024 · 12 comments

@mahesh-kore

Environmental Info:
K3s Version: k3s version v1.31.1+k3s1 (452dbbc)
go version go1.22.6

Node(s) CPU architecture, OS, and Version:
x86_64, NAME="Red Hat Enterprise Linux"
VERSION="8.6 (Ootpa)"

Cluster Configuration:
1 master and 1 agent

Describe the bug:
Issue with DNS Resolution in Airgapped K3s Cluster Due to UDP Block on Port 53

Steps To Reproduce:
We have set up an airgapped K3s multi-node cluster, and due to network restrictions, traffic on UDP port 53 is blocked, preventing CoreDNS from resolving hostnames. However, TCP traffic is allowed.

Is there a way to configure or force CoreDNS to use TCP instead of UDP for DNS queries? Additionally, can this configuration be applied to all pods to ensure they query DNS using TCP?


Expected behavior:
We modified the kube-dns Service and the CoreDNS entrypoint to use TCP.
Pods should be able to resolve service names over TCP instead of UDP.

Actual behavior:
Despite these changes, ping and nslookup commands continue to use UDP for DNS queries.

Additional context / logs:

@brandond
Member

brandond commented Nov 8, 2024

Is there a way to configure or force CoreDNS to use TCP instead of UDP for DNS queries? Additionally, can this configuration be applied to all pods to ensure they query DNS using TCP?

We should cover this in the docs, but yes you can customize the coredns settings via configmap:
#4397

You can see where in the corefile the various entries are imported:

.:53 {
    errors
    health
    ready
    kubernetes %{CLUSTER_DOMAIN}% in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    hosts /etc/coredns/NodeHosts {
        ttl 60
        reload 15s
        fallthrough
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
    import /etc/coredns/custom/*.override
}
import /etc/coredns/custom/*.server
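For illustration only, a minimal sketch of that customization mechanism: a ConfigMap named coredns-custom in kube-system, whose keys ending in .server are imported as extra server blocks by the last line above. The corp.example.com zone and the key name are placeholders; force_tcp is the forward plugin option that sends upstream queries over TCP.

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  # Hypothetical zone; keys ending in ".server" are picked up by
  # "import /etc/coredns/custom/*.server".
  corp-dns.server: |
    corp.example.com:53 {
        forward . /etc/resolv.conf {
            force_tcp
        }
        cache 30
    }

Note that the forward plugin can only appear once per server block, so forcing TCP for the catch-all forward . /etc/resolv.conf in the default block above would mean editing that line itself rather than adding a second forward through an .override.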

@brandond brandond closed this as completed Nov 8, 2024
@mahesh-kore
Author

mahesh-kore commented Nov 8, 2024

@brandond thanks for the quick update.

The current resolv.conf file contains the following configuration:

search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5 

I would like to add custom options to this configuration. For example:

search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5 use-vc

How can I add additional options to resolv.conf?

@brandond
Member

brandond commented Nov 8, 2024

That's not the correct thing to do. You don't need to change the resolver options in pods, since pods only talk to coredns. You need to change the coredns settings, as coredns is what is actually querying external resolvers. You need to configure coredns to use TCP instead of UDP.

@mahesh-kore
Author

mahesh-kore commented Nov 8, 2024

The issue is that traffic on UDP port 53 to any IP other than localhost is not allowed in the current environment. As a result, we are unable to reach the kube-dns service on UDP port 53. To work around this, we intend to configure all pods to use TCP instead of UDP for DNS queries.

@brandond
Member

brandond commented Nov 8, 2024

You can't even use 53 UDP within the cluster? That's not really going to work, you shouldn't be filtering traffic to the cluster pod or service CIDR ranges at that level.

I could see having it blocked outbound to the internet, but if you have something that is filtering traffic within the cluster, you're going to have problems.

@brandond
Member

brandond commented Nov 8, 2024

You can try setting resolver options in the pod DNS config, but you'll need to do this for every pod in your cluster.

https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config

options: an optional list of objects where each object may have a name property (required) and a value property (optional). The contents in this property will be merged to the options generated from the specified DNS policy. Duplicate entries are removed.
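For illustration, a minimal pod spec using that field (the pod name and image are placeholders; use-vc is honored by the glibc resolver, while musl-based images such as Alpine ignore it):

apiVersion: v1
kind: Pod
metadata:
  name: dns-over-tcp-demo   # hypothetical name
spec:
  containers:
    - name: app
      image: debian:12      # assumes a glibc-based image
      command: ["sleep", "infinity"]
  dnsConfig:
    options:
      # merged into the pod's generated /etc/resolv.conf, e.g. "options ndots:5 use-vc"
      - name: use-vc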

@mahesh-kore
Author

Yes, that's correct. We are not allowed to connect to any IP addresses, including internal IPs, except for 127.0.0.1, which has a downstream path to the enterprise DNS server. Therefore, we're trying this workaround. Do you see any other options to make it work in such a restricted environment?

@mahesh-kore
Author

You can try setting resolver options in the pod DNS config, but you'll need to do this for every pod in your cluster.

https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config

options: an optional list of objects where each object may have a name property (required) and a value property (optional). The contents in this property will be merged to the options generated from the specified DNS policy. Duplicate entries are removed.

Yes, I tried this at the pod level, but I'm looking for a solution that applies cluster-wide.

@brandond
Member

brandond commented Nov 8, 2024

There is no way to alter default DNS options at the cluster level. You could take a look at using something like this to automatically modify pod specs at creation time: https://github.com/redhat-cop/podpreset-webhook

Whatever policy you're working with here doesn't really make any sense for use with Kubernetes or containerized workloads in general though. You need to ensure that you don't have anything blocking traffic within the cluster or other things are going to break as well.
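As a stop-gap until something like that webhook is in place, one option is to patch each workload's pod template with the same dnsConfig. A hypothetical strategic-merge patch (applied per Deployment with, e.g., kubectl -n <namespace> patch deployment <name> --patch-file dns-tcp-patch.yaml):

# dns-tcp-patch.yaml -- every Deployment/StatefulSet/DaemonSet would need an
# equivalent patch; names above are placeholders.
spec:
  template:
    spec:
      dnsConfig:
        options:
          - name: use-vc   # glibc resolver option: resolve over TCP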

@mahesh-kore
Author

mahesh-kore commented Nov 8, 2024

I completely agree with you. Given the current limitations, we’re exploring a workaround. Eventually, we plan to move to a hosted Kubernetes solution where we won’t face these kinds of restrictions.

Manually editing the resolv.conf inside the container works, so now we're focused on finding a way to automate this configuration consistently across all pods.

@mahesh-kore
Author

@brandond can we use --resolv-conf or the $K3S_RESOLV_CONF environment variable to achieve this?

@brandond
Member

brandond commented Nov 8, 2024

I don't believe so; the options from the host's resolv.conf don't have any impact on the resolv.conf in the pods. You need to use spec.dnsConfig for that.
