Environmental Info:
K3s Version:
Node(s) CPU architecture, OS, and Version:
Cluster Configuration: 1 node (master)

Describe the bug:
I have followed the guide from https://docs.k3s.io/datastore/backup-restore#backup-and-restore-with-sqlite. During the restore process, some of my pods do not come up because their init containers are stuck checking whether a specified service is up (bad address error).

Steps To Reproduce:
Installed K3s:
mkdir backup_platform_state
cd backup_platform_state
systemctl stop k3s
# backup: copy the SQLite datastore and the join token while k3s is stopped
cp -a /var/lib/rancher/k3s/server/db .
cp -a /var/lib/rancher/k3s/server/token .
systemctl restart k3s
# restore: stop k3s, replace the datastore and token with the backed-up copies, then restart
systemctl stop k3s
rm -rf /var/lib/rancher/k3s/server/db
mv db /var/lib/rancher/k3s/server/
mv token /var/lib/rancher/k3s/server/
systemctl restart k3s
Expected behavior:
No need to restart the coredns pod.
Actual behavior:
DNS resolution is not working after the restore. Some of my init containers, which poll a specified service to check whether it is ready, fail with a bad address error (see the sketch below). To make them work again I need to restart the coredns pod.
nc: bad address 'service-1'
wait...
nc: bad address 'service-1'
wait...
nc: bad address 'service-1'
wait...
nc: bad address 'service-1'
wait...
nc: bad address 'service-1'
wait...
kubectl -n test get svc | grep service
service-1 ClusterIP 10.43.186.252 <none> 2181/TCP,2888/TCP,3888/TCP
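The init containers run a readiness loop roughly like the following (a minimal sketch; the service name and port are taken from the output above, and a busybox-style nc with -z is assumed):

until nc -z service-1 2181; do
  echo "wait..."
  sleep 2
done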
Additional context / logs:
The only logs I have found in the coredns pod:
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server is currently unable to handle the request (get namespaces)
[INFO] plugin/kubernetes: Trace[1776613576]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169 (04-Sep-2024 09:52:23.840) (total time: 43271ms):
Trace[1776613576]: ---"Objects listed" error:<nil> 43271ms (09:53:07.112)
Trace[1776613576]: [43.271393766s] [43.271393766s] END
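For reference, the restart that clears the errors can be done like this (assuming the default coredns deployment in the kube-system namespace):

kubectl -n kube-system rollout restart deployment/coredns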
If you're testing by just stopping K3s on an existing node, and replacing the DB file, you should also run k3s-killall.sh to force an immediate restart of the pods - unless you want to wait for them to go unhealthy and get recreated.
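Put together, a restore on an existing node would look roughly like this (a sketch based on the comment above; the script path assumes the default install location):

systemctl stop k3s
/usr/local/bin/k3s-killall.sh   # kill the remaining pod containers so they are recreated immediately
rm -rf /var/lib/rancher/k3s/server/db
mv db /var/lib/rancher/k3s/server/
mv token /var/lib/rancher/k3s/server/
systemctl start k3s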