etcdctl get: "Error: connection error: connection closed" #8647

Closed
mshiriv opened this issue Jul 5, 2020 · 12 comments
Labels
co/etcd (startup failures where etcd may be involved), kind/bug (categorizes issue or PR as related to a bug), priority/important-soon (must be staffed and worked on either currently, or very soon, ideally in time for the next release)

Comments


mshiriv commented Jul 5, 2020

I installed kubectl, minikube, and VirtualBox on Ubuntu 20.04 LTS:

versions
virtualbox: VirtualBox Graphical User Interface Version 6.1.6_Ubuntu r137129
OS: Ubuntu 20.04 LTS (Focal Fossa)
kernel: 5.4.0-40-generic
kubectl: Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
minikube: v1.11.0 commit: 57e2f55

$ minikube start --driver=virtualbox
😄  minikube v1.11.0 on Ubuntu 20.04
✨  Using the virtualbox driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating virtualbox VM (CPUs=2, Memory=3900MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

Once minikube was up and running, I tried to read from etcd using kubectl:

$ kubectl -n kube-system exec etcd-minikube -- etcdctl  get /                  
{"level":"warn","ts":"2020-07-05T09:30:08.933Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-b59e5283-9627-4de4-a41c-e70fde4dc000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest connection error: connection closed"}
Error: context deadline exceeded
command terminated with exit code 1

I also tried accessing etcd with TLS verification disabled, but got the same error:

$ kubectl -n kube-system exec etcd-minikube -- etcdctl --insecure-skip-tls-verify get /
{"level":"warn","ts":"2020-07-05T09:31:28.715Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-d64a5fb7-77c4-4755-9645-f1a34fb521c1/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest connection error: connection closed"}
Error: context deadline exceeded
command terminated with exit code 1

Here are the minikube logs:

$ minikube logs | grep -i TLS   
2020-07-05 09:35:12.000948 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-07-05 09:35:12.994776 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
I0705 09:35:17.188326       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0705 09:35:17.325700       1 tlsconfig.go:240] Starting DynamicServingCertificateController
F0705 09:35:38.482109       1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: net/http: TLS handshake timeout

The last error line relates to the storage-provisioner.

@medyagh changed the title from "etcd-minikube Error: context deadline exceeded" to "virtualbox: etcd-minikube Error: context deadline exceeded" Jul 7, 2020
@medyagh added the addon/storage-provisioner (issues relating to storage provisioner addon) label Jul 7, 2020

medyagh commented Jul 7, 2020

@mahmoudshirivaramini thank you very much for providing the logs.

Do you mind trying to see if you get the exact same problem with the Docker driver?

minikube delete
minikube start --driver=docker --wait=all

Also, do you mind sharing the reason you were trying to access etcd manually?

And could you please try it with --wait=all so minikube waits for all the components?


medyagh commented Jul 7, 2020

/triage needs-information
/triage support

@k8s-ci-robot added the triage/needs-information (indicates an issue needs more information in order to work on it) and kind/support (categorizes issue or PR as a support question) labels Jul 7, 2020

mshiriv commented Jul 8, 2020

@medyagh I tried with the Docker driver:

$ minikube delete 
🔥  Deleting "minikube" in docker ...
🔥  Deleting container "minikube" ...
🔥  Removing /home/mahmoud/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
$ minikube start --driver=docker --wait=all
😄  minikube v1.11.0 on Ubuntu 20.04
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

But I got the same error:

$ kubectl -n kube-system exec etcd-minikube -- etcdctl  get / && echo $? 
{"level":"warn","ts":"2020-07-08T08:35:35.943Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-2601d88b-c3c1-41ad-af2b-9bfbbfc83d61/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest connection error: connection closed"}
Error: context deadline exceeded
command terminated with exit code 1

I was teaching others about etcd's architecture and simply tried to get all the keys and their values, so the reason was learning.
Here's the Docker engine configuration, which is the default:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target


mshiriv commented Jul 8, 2020

Minikube log attached.

$ minikube logs > minikube.log

minikube.log


medyagh commented Jul 10, 2020

Thank you @mahmoudshirivaramini for providing detailed, reproducible information. I confirm that I get the same result as you.

$ kubectl -n kube-system exec etcd-minikube -- etcdctl  get / 
{"level":"warn","ts":"2020-07-10T18:28:17.301Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-10bbffd3-802d-4c54-8701-835f39b38ff8/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest connection error: connection closed"}
Error: context deadline exceeded
command terminated with exit code 1

However, the etcd pod itself seems to be running and healthy:

medya@~/workspace/minikube (restart_msg1) $ kc get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   etcd-minikube                      1/1     Running   0          81s

And here are the logs for the etcd pod:


medya@~/workspace/minikube (restart_msg1) $ kc logs etcd-minikube -n kube-system
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-07-10 18:27:34.794668 I | etcdmain: etcd Version: 3.4.3
2020-07-10 18:27:34.794715 I | etcdmain: Git SHA: 3cf2f69b5
2020-07-10 18:27:34.794726 I | etcdmain: Go Version: go1.12.12
2020-07-10 18:27:34.794738 I | etcdmain: Go OS/Arch: linux/amd64
2020-07-10 18:27:34.794754 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-07-10 18:27:34.794987 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-07-10 18:27:34.795992 I | embed: name = minikube
2020-07-10 18:27:34.796028 I | embed: data dir = /var/lib/minikube/etcd
2020-07-10 18:27:34.796044 I | embed: member dir = /var/lib/minikube/etcd/member
2020-07-10 18:27:34.796060 I | embed: heartbeat = 100ms
2020-07-10 18:27:34.796086 I | embed: election = 1000ms
2020-07-10 18:27:34.796102 I | embed: snapshot count = 10000
2020-07-10 18:27:34.796145 I | embed: advertise client URLs = https://172.17.0.3:2379
2020-07-10 18:27:34.870638 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2
raft2020/07/10 18:27:34 INFO: b273bc7741bcb020 switched to configuration voters=()
raft2020/07/10 18:27:34 INFO: b273bc7741bcb020 became follower at term 0
raft2020/07/10 18:27:34 INFO: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/07/10 18:27:34 INFO: b273bc7741bcb020 became follower at term 1
raft2020/07/10 18:27:34 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
2020-07-10 18:27:34.978919 W | auth: simple token is not cryptographically signed
2020-07-10 18:27:34.982459 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-07-10 18:27:34.984803 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-07-10 18:27:34.984919 I | embed: listening for metrics on http://127.0.0.1:2381
2020-07-10 18:27:34.985463 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-07-10 18:27:34.985764 I | embed: listening for peers on 172.17.0.3:2380
raft2020/07/10 18:27:34 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
2020-07-10 18:27:34.985970 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
raft2020/07/10 18:27:35 INFO: b273bc7741bcb020 is starting a new election at term 1
raft2020/07/10 18:27:35 INFO: b273bc7741bcb020 became candidate at term 2
raft2020/07/10 18:27:35 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2
raft2020/07/10 18:27:35 INFO: b273bc7741bcb020 became leader at term 2
raft2020/07/10 18:27:35 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2
2020-07-10 18:27:35.874168 I | etcdserver: setting up the initial cluster version to 3.4
2020-07-10 18:27:35.874888 N | etcdserver/membership: set the initial cluster version to 3.4
2020-07-10 18:27:35.874962 I | etcdserver/api: enabled capabilities for version 3.4
2020-07-10 18:27:35.875352 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2
2020-07-10 18:27:35.875668 I | embed: ready to serve client requests
2020-07-10 18:27:35.877006 I | embed: serving client requests on 172.17.0.3:2379
2020-07-10 18:27:35.877089 I | embed: ready to serve client requests
2020-07-10 18:27:35.879539 I | embed: serving client requests on 127.0.0.1:2379
2020-07-10 18:27:55.658805 I | embed: rejected connection from "127.0.0.1:54004" (error "tls: first record does not look like a TLS handshake", ServerName "")
2020-07-10 18:27:56.659789 I | embed: rejected connection from "127.0.0.1:54014" (error "tls: first record does not look like a TLS handshake", ServerName "")
2020-07-10 18:27:58.070927 I | embed: rejected connection from "127.0.0.1:54020" (error "tls: first record does not look like a TLS handshake", ServerName "")
2020-07-10 18:28:00.234754 I | embed: rejected connection from "127.0.0.1:54028" (error "tls: first record does not look like a TLS handshake", ServerName "")
2020-07-10 18:28:12.298544 I | embed: rejected connection from "127.0.0.1:54080" (error "tls: first record does not look like a TLS handshake", ServerName "")
2020-07-10 18:28:13.299522 I | embed: rejected connection from "127.0.0.1:54084" (error "tls: first record does not look like a TLS handshake", ServerName "")
2020-07-10 18:28:14.595566 I | embed: rejected connection from "127.0.0.1:54092" (error "tls: first record does not look like a TLS handshake", ServerName "")

@medyagh changed the title from "virtualbox: etcd-minikube Error: context deadline exceeded" to etcdctl get: "Error: connection error: connection closed" Jul 10, 2020

medyagh commented Jul 10, 2020

@mahmoudshirivaramini
Since the error says "connection error: connection closed", I wonder if this is something we need to fix in our etcd config to pass the right IP.

I think passing extra args to make etcd listen on 0.0.0.0 instead of localhost might fix your problem. According to the etcd docs, the default is "http://localhost:2379":
https://etcd.io/docs/v3.4.0/op-guide/configuration/

--listen-client-urls
List of URLs to listen on for client traffic. This flag tells the etcd to accept incoming requests from the clients on the specified scheme://IP:port combinations. Scheme can be either http or https. Alternatively, use unix://<file-path> or unixs://<file-path> for unix sockets. If 0.0.0.0 is specified as the IP, etcd listens to the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. The etcd will respond to requests from any of the listed addresses and ports.
default: “http://localhost:2379”
env variable: ETCD_LISTEN_CLIENT_URLS
example: “http://10.0.0.1:2379”
invalid example: “http://example.com:2379” (domain name is invalid for binding)

Or it could be something about the certs:

embed: rejected connection from "127.0.0.1:54004" (error "tls: first record does not look like a TLS handshake", ServerName "")

This does look like a bug and needs more debugging. I would accept any help on this!

@medyagh added the co/etcd, kind/bug, and priority/important-soon labels and removed the addon/storage-provisioner, triage/needs-information, and kind/support labels Jul 10, 2020

mshiriv commented Jul 12, 2020

@medyagh I got a shell in the etcd-minikube pod and tried to connect to etcd without declaring the protocol:

# ETCDCTL_API=3 etcdctl  get /  --endpoints=127.0.0.1:2379 --debug
ETCDCTL_CACERT=
ETCDCTL_CERT=
ETCDCTL_COMMAND_TIMEOUT=5s
ETCDCTL_DEBUG=true
ETCDCTL_DIAL_TIMEOUT=2s
ETCDCTL_DISCOVERY_SRV=
ETCDCTL_DISCOVERY_SRV_NAME=
ETCDCTL_ENDPOINTS=[127.0.0.1:2379]
ETCDCTL_HEX=false
ETCDCTL_INSECURE_DISCOVERY=true
ETCDCTL_INSECURE_SKIP_TLS_VERIFY=false
ETCDCTL_INSECURE_TRANSPORT=true
ETCDCTL_KEEPALIVE_TIME=2s
ETCDCTL_KEEPALIVE_TIMEOUT=6s
ETCDCTL_KEY=
ETCDCTL_PASSWORD=
ETCDCTL_USER=
ETCDCTL_WRITE_OUT=simple
WARNING: 2020/07/12 02:40:34 Adjusting keepalive ping interval to minimum period of 10s
WARNING: 2020/07/12 02:40:34 Adjusting keepalive ping interval to minimum period of 10s
INFO: 2020/07/12 02:40:34 parsed scheme: "endpoint"
INFO: 2020/07/12 02:40:34 ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
INFO: 2020/07/12 02:40:34 transport: loopyWriter.run returning. connection error: desc = "transport is closing"
INFO: 2020/07/12 02:40:35 transport: loopyWriter.run returning. connection error: desc = "transport is closing"
INFO: 2020/07/12 02:40:37 transport: loopyWriter.run returning. connection error: desc = "transport is closing"
{"level":"warn","ts":"2020-07-12T02:40:39.993Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-d04dc6cc-00a8-4b0b-9d4f-79f4fdabce6d/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest connection error: connection closed"}
Error: context deadline exceeded

Here are the etcd-minikube pod logs:

2020-07-12 02:48:22.660741 I | embed: rejected connection from "127.0.0.1:49374" (error "tls: first record does not look like a TLS handshake", ServerName "")
2020-07-12 02:48:23.661688 I | embed: rejected connection from "127.0.0.1:49380" (error "tls: first record does not look like a TLS handshake", ServerName "")
2020-07-12 02:48:25.223651 I | embed: rejected connection from "127.0.0.1:49390" (error "tls: first record does not look like a TLS handshake", ServerName "")

I also tried connecting over HTTPS with TLS verification skipped:

# ETCDCTL_API=3 etcdctl  get /  --endpoints=https://127.0.0.1:2379 --insecure-skip-tls-verify=true --debug
ETCDCTL_CACERT=
ETCDCTL_CERT=
ETCDCTL_COMMAND_TIMEOUT=5s
ETCDCTL_DEBUG=true
ETCDCTL_DIAL_TIMEOUT=2s
ETCDCTL_DISCOVERY_SRV=
ETCDCTL_DISCOVERY_SRV_NAME=
ETCDCTL_ENDPOINTS=[https://127.0.0.1:2379]
ETCDCTL_HEX=false
ETCDCTL_INSECURE_DISCOVERY=true
ETCDCTL_INSECURE_SKIP_TLS_VERIFY=true
ETCDCTL_INSECURE_TRANSPORT=true
ETCDCTL_KEEPALIVE_TIME=2s
ETCDCTL_KEEPALIVE_TIMEOUT=6s
ETCDCTL_KEY=
ETCDCTL_PASSWORD=
ETCDCTL_USER=
ETCDCTL_WRITE_OUT=simple
WARNING: 2020/07/12 02:44:48 Adjusting keepalive ping interval to minimum period of 10s
WARNING: 2020/07/12 02:44:48 Adjusting keepalive ping interval to minimum period of 10s
INFO: 2020/07/12 02:44:48 parsed scheme: "endpoint"
INFO: 2020/07/12 02:44:48 ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
WARNING: 2020/07/12 02:44:48 grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". Reconnecting...
WARNING: 2020/07/12 02:44:48 grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". Reconnecting...
WARNING: 2020/07/12 02:44:49 grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". Reconnecting...
WARNING: 2020/07/12 02:44:49 grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". Reconnecting...
WARNING: 2020/07/12 02:44:51 grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". Reconnecting...
WARNING: 2020/07/12 02:44:51 grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". Reconnecting...
{"level":"warn","ts":"2020-07-12T02:44:53.811Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-e40241d2-2f6c-4630-9182-2f89e9ada389/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest connection error: connection error: desc = \"transport: authentication handshake failed: x509: certificate signed by unknown authority\""}
Error: context deadline exceeded

Here are the etcd-minikube pod logs:

$ kubectl -n kube-system logs -f etcd-minikube
2020-07-12 02:45:50.995806 I | embed: rejected connection from "127.0.0.1:48474" (error "remote error: tls: bad certificate", ServerName "")
2020-07-12 02:45:52.002671 I | embed: rejected connection from "127.0.0.1:48482" (error "remote error: tls: bad certificate", ServerName "")
2020-07-12 02:45:53.456446 I | embed: rejected connection from "127.0.0.1:48488" (error "remote error: tls: bad certificate", ServerName "")

Since kubectl -n kube-system get po works, we know kube-apiserver-minikube can connect to etcd, so connecting with the certificates that kube-apiserver uses to reach etcd should work!

kubectl -n kube-system get po kube-apiserver-minikube -oyaml

...
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.17.0.3
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/var/lib/minikube/certs/ca.crt
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt
    - --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt
    - --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    - --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt
    - --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=8443
    - --service-account-key-file=/var/lib/minikube/certs/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/var/lib/minikube/certs/apiserver.crt
    - --tls-private-key-file=/var/lib/minikube/certs/apiserver.key

...
kubectl -n kube-system get po etcd-minikube  -oyaml

...
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://172.17.0.3:2379
    - --cert-file=/var/lib/minikube/certs/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/minikube/etcd
    - --initial-advertise-peer-urls=https://172.17.0.3:2380
    - --initial-cluster=minikube=https://172.17.0.3:2380
    - --key-file=/var/lib/minikube/certs/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://172.17.0.3:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://172.17.0.3:2380
    - --name=minikube
    - --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/var/lib/minikube/certs/etcd/peer.key
    - --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt
...

@priyawadhwa

Hey @mahmoudshirivaramini, did using the apiserver certs resolve your issue?


medyagh commented Sep 23, 2020

@mahmoudshirivaramini I wonder if that suggestion helped. Did you dig any further into this?


mshiriv commented Oct 14, 2020

@medyagh I solved the issue using the etcd certificates and key that are used by kube-apiserver.

kubectl -n kube-system describe po kube-apiserver-minikube
...
      --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt
      --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt
      --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key

...

Then I connected to the minikube virtual machine via minikube ssh, installed etcdctl, and ran the command below:

./etcdctl get / --prefix --keys-only --limit=10 --cacert /var/lib/minikube/certs/etcd/ca.crt --cert /var/lib/minikube/certs/apiserver-etcd-client.crt  --key /var/lib/minikube/certs/apiserver-etcd-client.key

And it worked!
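For anyone who wants to run the same query from the host in one step, the approach above can be sketched as a small shell function that goes through `minikube ssh`. This is only a sketch: it assumes etcdctl has already been installed inside the VM and is on root's PATH (it is not there by default), and the cert paths are the ones from this setup.

```shell
# Sketch: run the working etcdctl query from the host via `minikube ssh`.
# Assumes etcdctl was installed inside the VM and is on root's PATH;
# cert paths are the apiserver-etcd-client credentials shown above.
etcd_get_all() {
  minikube ssh -- sudo ETCDCTL_API=3 etcdctl get / --prefix --keys-only \
    --cacert /var/lib/minikube/certs/etcd/ca.crt \
    --cert /var/lib/minikube/certs/apiserver-etcd-client.crt \
    --key /var/lib/minikube/certs/apiserver-etcd-client.key
}
```

Calling `etcd_get_all` should then list the keys, the same as running the command inside the VM.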

@priyawadhwa

@mahmoudshirivaramini that's great news! I'm going to go ahead and close this issue since you were able to resolve it. Feel free to reopen at any time by commenting /reopen


ankifor commented Jan 7, 2024

Although this thread has been closed for four years, I ran into the same issue and was able to find a way to run etcd commands through kubectl exec. Somebody may find it helpful.

First, find out where the certificates are stored on the etcd node (you don't need the server certificates, but the peer ones):
kubectl get po etcd-minikube -n kube-system -o yaml | grep -- "--peer"
In my case:

    - --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/var/lib/minikube/certs/etcd/peer.key
    - --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt

Now call etcdctl via kubectl with the correct options (sh -c is needed if you want to set the ETCDCTL_API environment variable):
kubectl exec -it etcd-minikube -n kube-system -- sh -c "ETCDCTL_API=3 etcdctl get / --prefix --keys-only --cacert /var/lib/minikube/certs/etcd/ca.crt --cert /var/lib/minikube/certs/etcd/peer.crt --key /var/lib/minikube/certs/etcd/peer.key"
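For repeated use, the exec invocation above can be wrapped in a small shell function on the host. This is a sketch only: the pod name, namespace, and cert paths are the ones from this setup and may differ on yours.

```shell
# Sketch: wrap the kubectl exec + etcdctl incantation for repeated use.
# Pod name, namespace, and peer-cert paths are taken from the setup above;
# adjust them for your cluster. Extra etcdctl args are passed through via $*.
etcdctl_minikube() {
  kubectl -n kube-system exec etcd-minikube -- sh -c \
    "ETCDCTL_API=3 etcdctl $* \
       --cacert /var/lib/minikube/certs/etcd/ca.crt \
       --cert /var/lib/minikube/certs/etcd/peer.crt \
       --key /var/lib/minikube/certs/etcd/peer.key"
}

# Hypothetical usage: etcdctl_minikube get / --prefix --keys-only
```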
