Environmental Info:
K3s Version:
k3s version v1.31.4+k3s1 (a562d09)
go version go1.22.9
Node(s) CPU architecture, OS, and Version:
Linux tom-nuc 6.8.0-49-generic #49~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Nov 6 17:42:15 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration:
Single machine install
Describe the bug:
I cannot connect to the internet from within pods. Since this is Ubuntu, I've disabled UFW.
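For reference, here is how the host firewall state can be double-checked (a minimal sketch using standard Ubuntu/iptables tooling):
sudo ufw status verbose   # should report "Status: inactive"
sudo iptables -S | head   # inspect the default chain policies and top-level rules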
Steps To Reproduce:
Installed K3s:
Used the quickstart script
I start up a busybox pod and try to ping or wget google.com:
kk run -i --tty --rm debug --image=busybox --restart=Never -- sh
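To separate a DNS failure from a routing failure, the same busybox pod can also be pointed at literal IPs and an external resolver (a sketch; 1.1.1.1 and 8.8.8.8 are just arbitrary public endpoints):
/ # ping -c 2 1.1.1.1              # ICMP to a raw IP, no DNS involved
/ # wget -qO- http://1.1.1.1 > /dev/null && echo "TCP egress OK"
/ # nslookup google.com 8.8.8.8    # bypass cluster DNS and query an external resolver directly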
Expected behavior:
I should be able to connect out to the internet.
Actual behavior:
The pod behaves as if it were sandboxed, with no working outbound connectivity.
Additional context / logs:
My local DNS setup is:
/etc/resolv.conf => nameserver 8.8.8.8
tom@tom-nuc:~$ kk run -i --tty --rm debug --image=busybox --restart=Never -- sh
/ #
/ # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
/ # ping google.com
ping: bad address 'google.com'
I also can't ping the DNS server, 10.43.0.10.
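Note that 10.43.0.10 is a ClusterIP, i.e. a virtual address implemented by iptables rules, so it generally will not answer ICMP even on a healthy cluster; querying it as a DNS server is the more meaningful test (a sketch, run from inside the busybox pod):
/ # nslookup kubernetes.default.svc.cluster.local 10.43.0.10
/ # nslookup google.com 10.43.0.10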
I had a look at the CoreDNS logs in kube-system; there's a lot going on there:
==== START logs for container coredns of pod kube-system/coredns-ccb96694c-wx7ww ====
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.override
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.43.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.43.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.43.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.43.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.43.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: connect: connection refused
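The repeated "connection refused" against 10.43.0.1:443 means CoreDNS cannot reach the in-cluster kubernetes Service VIP, which suggests service routing (the kube-proxy-managed iptables rules) is broken rather than CoreDNS itself. A quick check from the host (a sketch; even a 401/403 from curl would prove the VIP routes, whereas "connection refused" proves it does not):
kubectl get endpoints kubernetes            # should list the node's API server address
curl -vk https://10.43.0.1:443/healthz      # does the Service VIP reach the apiserver at all?
sudo iptables-save -t nat | grep 10.43.0.1  # are the DNAT rules for the Service VIP present?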
I also had a look at iptables, and this is where it got interesting... Having been burnt several times before, I had set up iptables to pass everything through unrestricted, but I find a huge amount going on here, all of it KUBE-related, e.g.:
KUBE-POD-FW-QDSOJU2OKPP7DKIO  all  --  anywhere    10.42.0.16  /* rule to jump traffic destined to POD name:gitea-579b69946b-vgx4p namespace: gitea to chain KUBE-POD-FW-QDSOJU2OKPP7DKIO */
KUBE-POD-FW-QDSOJU2OKPP7DKIO  all  --  10.42.0.16  anywhere    /* rule to jump traffic from POD name:gitea-579b69946b-vgx4p namespace: gitea to chain KUBE-POD-FW-QDSOJU2OKPP7DKIO */
KUBE-POD-FW-6OXKOEB4ZFSLRLFJ  all  --  anywhere    10.42.0.13  /* rule to jump traffic destined to POD name:svclb-gitea-ssh-9e6489ae-7xfnz namespace: kube-system to chain KUBE-POD-FW-6OXKOEB4ZFSLRLFJ */
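As far as I can tell, the KUBE-POD-FW-* chains are created by the network policy controller embedded in k3s (kube-router's netpol implementation) for pods selected by a NetworkPolicy, so it is worth checking whether any NetworkPolicy objects are restricting traffic (a sketch; the gitea namespace is just the one from the rules above):
kubectl get networkpolicy -A             # any policies at all?
kubectl describe networkpolicy -n gitea  # what do they allow/deny for the affected pods?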
What would be the easiest approach to resetting all this properly (not via a full reinstall, please)? I've already tried reinstalling k3s. I'm quite happy to lose the existing pods.
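One reset path that stops short of a full uninstall (a sketch, assuming the standard script install, which ships these helpers): k3s-killall.sh stops k3s and all pods and cleans up the k3s-managed iptables rules and CNI interfaces, and starting the service again rebuilds them from scratch.
sudo /usr/local/bin/k3s-killall.sh   # stop k3s, kill pod processes, clean up k3s iptables rules and network interfaces
sudo systemctl start k3s             # start clean; workloads are rescheduled from the existing datastore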