Unable to connect to pod's container IP; works in Docker #3915
-
Environmental Info:
Node(s) CPU architecture, OS, and Version: Linux k8s-01 5.11.0-1016-raspi #17-Ubuntu SMP PREEMPT aarch64 GNU/Linux
Cluster Configuration: 1 node, 1 server, out-of-the-box setup

Describe the bug:
My pod runs a DNS resolver on port 53. From a shell in the same container, I can connect to the resolver on 127.0.0.1, but NOT on the container's IP address (10.42.0.99). Running the same image in Docker, I can connect to either IP address. I've attached steps to reproduce the failure in k3s and the success in Docker. Just to add to the mystery, I installed nmap and it claims port 53 is open on the container IP -- but nslookup nevertheless times out trying to connect to it.

Steps To Reproduce:
Expected behavior:
Actual behavior:
Additional context / logs:
I've attached the deployment/service YAML file that I used, renamed to dnscache.txt because the .yaml extension was rejected.
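For context, the in-container check I'm describing boils down to roughly this (the deployment name and nmap invocation are illustrative; 10.42.0.99 is the pod IP that k3s assigned):

$ kubectl exec -it deploy/dnscache -- sh   # open a shell inside the pod's own container
$ nslookup google.com 127.0.0.1            # works: the resolver answers on loopback
$ nslookup google.com 10.42.0.99           # times out, even though it's the same process
$ nmap -p 53 10.42.0.99                    # nevertheless claims port 53 is open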
-
I don't understand what you're trying to do here. This will get you the IP address of the host, not the IP address of the container. If this really works with your Docker setup (which you haven't shown us), then you surely mapped port 53/udp to your host. There are multiple ways to do that with Kubernetes. For example, when using K3s, you can use the integrated load balancer (klipper-lb). To use that, change the type of your Service to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: dnscache
spec:
  selector:
    app: dnscache
  ports:
    - name: dnscache-53-udp
      port: 53
      protocol: UDP
      targetPort: 53
  type: LoadBalancer
Alternatively, you could ditch the Service altogether and just bind the Pod directly to the host by adding a hostPort to the container:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dnscache
  labels:
    app: dnscache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dnscache
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dnscache
    spec:
      containers:
        - name: dnscache
          image: budney/djbdns
          env:
            - name: delegate
              value: lan.jeenyus.net:153.18.172.in-addr.arpa
            - name: service
              value: dnscache
          ports:
            - name: dnscache-53-udp
              containerPort: 53
              hostPort: 53
              protocol: UDP
          resources: {}
          volumeMounts:
            - mountPath: /docker-compose.yml
              name: dnscache-hostpath0
              readOnly: true
      restartPolicy: Always
      volumes:
        - hostPath:
            path: /etc/djbdns/docker-compose.yml
          name: dnscache-hostpath0
Anyway, why are you mounting a docker-compose file into the container?
Edit: You seem to be new to Kubernetes, and this looks more like a question than an issue. I think this issue should be moved to "Discussions".
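Coming back to the two options above: either variant can be checked from the node once applied, roughly like this (the manifest file name and node IP are just examples):

$ kubectl apply -f dnscache.yaml      # or the attached dnscache.txt
$ kubectl get svc dnscache            # with type LoadBalancer, the node IP shows up under EXTERNAL-IP
$ dig @192.168.1.10 example.com       # query the resolver through the node's IP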
-
I can run this image in Docker, then exec a shell in the same container, and then contact the DNS server on either 127.0.0.1 or the IP address of eth0. But if I run the same image using k3s, I can contact the server on 127.0.0.1 but not on the IP address of eth0. In other words, the DNS server inside the container is binding all interfaces (0.0.0.0), but cannot receive any connections on eth0, even from within the container itself. Host ports aren't involved in any way at all; that's why I ran a shell inside the server's own container.

Before installing k3s, I had the service working just fine using docker-compose. After banging my head against the wall with k3s, I think I've been able to show that this is actually a bug of some sort. The first thing I tried was using LoadBalancer, but it's not going to do much if even processes inside the container itself can't connect to port 53. ...unless there's something I missed, which is entirely possible.
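The Docker side of the comparison is essentially this (the run options are illustrative; the container may also need the same env vars as in the deployment):

$ docker run -d --name dnscache budney/djbdns
$ docker exec -it dnscache sh
$ nslookup google.com 127.0.0.1         # works
$ nslookup google.com $(hostname -i)    # also works: the eth0 address inside the container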
-
I reread this:
I think what you missed was that I'm running everything from inside the container itself. Step #5 of "steps to reproduce" was:
That runs a shell in the container. The steps that come after it are performed inside that shell.
-
Same result: connection timed out.
-
I am sorry, I totally skipped over this part:
This is very interesting; I've never had this kind of issue. I think this is the core issue here, and everything I've said about host ports and stuff is irrelevant.
Edit: Please give me a few minutes to reproduce your setup exactly.
-
This is all running on Ubuntu 21.04, with image budney/djbdns.
-
dnscache doesn't respond to anything other than 127.0.0.1 by default. As per the docs:
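Concretely, for djbdns that means a prefix file has to exist under root/ip for the client network. A rough sketch of what opening it up for the pod network would look like in this image (the /srv/dnscache path is the one this image turns out to use, per the comments below; the daemontools service path is a guess and depends on how the image supervises dnscache, and as far as I know dnscache only scans root/ip at startup):

$ kubectl exec -it deploy/dnscache -- sh
$ touch /srv/dnscache/root/ip/10        # accept queries from any 10.* client address
$ svc -t /etc/service/dnscache          # restart dnscache so it rereads root/ip (service path is a guess)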
-
I got it! Look at the files inside /srv/dnscache/root/ip:
Docker:
K3s:
As @brandond correctly stated in #3915 (comment), these file names describe the prefixes of the IP addresses that the dnscache process responds to. So, where does the broken 10?10 come from? Actually from the entrypoint script at /start.sh, which does this:
The issue is that the first line actually returns 10 twice in K3s, with a line break in between. So, in the end it comes down to a buggy entrypoint script of this dnscache image.
Edit: Running
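If you want to see the broken file name without poking around interactively, something like this makes the embedded newline visible (od -c prints each character, so the newline shows up as \n):

$ kubectl exec deploy/dnscache -- sh -c 'ls /srv/dnscache/root/ip | od -c'

The fix on the image side is presumably just making that first line of /start.sh produce a single prefix (e.g. by piping it through head -n1), but that depends on what the script actually does.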
-
Oh my goodness. Thank you for spotting that! I've built new Docker images and now the service works perfectly. Switching it to LoadBalancer also makes it accessible from the node's IP address. Thanks!