Kubernetes in Google Kubernetes Engine, running Prometheus 2.1.0, scraping Kubernetes, itself, and metrics from the nodes in the cluster.
Now with a Persistent Volume on Prometheus! (So you can delete the Prometheus pod without losing data.)
Now with Grafana 5.2.2! (You still have to configure the datasources manually; automating that is still on my to-do list.)
Inspired heavily by https://coreos.com/blog/monitoring-kubernetes-with-prometheus.html from August 03, 2016.
- `kubectl`
- `gcloud`
- and (of course..) rights to create the RBAC resources in the GKE cluster, see below:
$ kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin [email protected]
NB: remember to use the email associated with your GCP account
- Have access to a GKE cluster with `kubectl`
- Run `setup.sh`
- Wait about 10-15 minutes for the load balancers to start serving traffic
- Configure the Prometheus data source in Grafana
- ???
- Profit!
Run `teardown.sh`
$ gcloud container clusters create my-prometheus-cluster
$ kubectl create -f rbac/service-account-prometheus.yaml
$ kubectl create -f rbac/clusterrole-prometheus.yaml
$ kubectl create -f rbac/clusterrolebinding-prometheus.yaml
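For orientation, a minimal sketch of what these RBAC manifests typically contain — the resource names (`prometheus`) and the namespace are assumptions; the files under rbac/ are authoritative:

```yaml
# Sketch only: read access to the objects Prometheus discovers and scrapes.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus            # assumed name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources: ["nodes", "nodes/proxy", "services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: default
```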
Optional: Create the Node Exporter DaemonSet (if you want node metrics like CPU usage, etc.)
$ kubectl create -f node-exporter/daemonset-node-exporter.yaml
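A minimal sketch of what such a Node Exporter DaemonSet looks like — the image tag and labels are assumptions; node-exporter/daemonset-node-exporter.yaml is authoritative:

```yaml
# Sketch only: runs one node-exporter pod per node, on the node's own network.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true          # expose the exporter on the node IP
      hostPID: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.16.0   # assumed version
        ports:
        - containerPort: 9100
          hostPort: 9100
```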
$ kubectl create -f prometheus/configmap-prometheus.yaml
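The ConfigMap carries prometheus.yml. A hedged sketch of the kind of scrape configuration it holds — job names and relabeling are illustrative, not the exact contents of prometheus/configmap-prometheus.yaml:

```yaml
# Sketch only: Prometheus scraping itself plus node metrics via Kubernetes service discovery.
global:
  scrape_interval: 15s
scrape_configs:
- job_name: prometheus           # Prometheus scraping itself
  static_configs:
  - targets: ['localhost:9090']
- job_name: kubernetes-nodes     # node metrics from the node-exporter DaemonSet
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - source_labels: [__address__] # rewrite the kubelet port to the node-exporter port
    regex: '(.*):10250'
    replacement: '${1}:9100'
    target_label: __address__
```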
$ kubectl create -f prometheus/pv-deployment-prometheus.yaml
Alternatively, the "old" deployment without a persistent volume (or swap the commented lines in `setup.sh` and `teardown.sh`):
$ kubectl create -f prometheus/deployment-prometheus.yaml
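For orientation, a minimal sketch of the Prometheus Deployment — names like `prometheus-config` and `prometheus-pvc` plus the image tag are assumptions; the manifests under prometheus/ are authoritative. The non-PV variant differs only in backing the data volume with an emptyDir instead of the PVC:

```yaml
# Sketch only: Prometheus 2.1.0 with config from the ConfigMap and data on a PVC.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      serviceAccountName: prometheus
      containers:
      - name: prometheus
        image: prom/prometheus:v2.1.0
        args:
        - --config.file=/etc/prometheus/prometheus.yml
        - --storage.tsdb.path=/prometheus
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: config
          mountPath: /etc/prometheus
        - name: data
          mountPath: /prometheus
      volumes:
      - name: config
        configMap:
          name: prometheus-config        # assumed ConfigMap name
      - name: data
        persistentVolumeClaim:
          claimName: prometheus-pvc      # assumed claim name; the "old" variant uses emptyDir: {}
```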
Create the Prometheus Service, exposing the deployment as a NodePort, since GKE Ingresses require this
$ kubectl create -f networking/service-prometheus.yaml
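A minimal sketch of such a Service — the name and labels are assumptions; networking/service-prometheus.yaml is authoritative:

```yaml
# Sketch only: NodePort is required for the Service to be a GKE Ingress backend.
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service       # assumed name
spec:
  type: NodePort
  selector:
    app: prometheus
  ports:
  - port: 9090
    targetPort: 9090
```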
$ kubectl create -f networking/ingress-prometheus.yaml
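And a hedged sketch of the Ingress, using the extensions/v1beta1 API that was current at the time — the backing Service name is an assumption; networking/ingress-prometheus.yaml is authoritative:

```yaml
# Sketch only: a single default backend, no host or path rules.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
spec:
  backend:
    serviceName: prometheus-service   # assumed Service name
    servicePort: 9090
```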
Use the following command to find the address Prometheus is being served on:
$ kubectl get ingress prometheus-ingress
Optional: Create the Grafana resources from the grafana/ folder. (You still have to add the datasource manually: for the URL, use the IP of the Prometheus Service, e.g. http://10.59.253.76:9090, and set Access to Proxy.)
$ kubectl create -f grafana/deployment-grafana.yaml
$ kubectl create -f grafana/service-grafana.yaml
$ kubectl create -f grafana/ingress-grafana.yaml
Add patience, and use the following command to find the address Grafana is being served on:
$ kubectl get ingress grafana-ingress
A: Because I wanted to run something in Kubernetes, and Prometheus seemed like a great idea, since it'd let me visualize the stuff running in my cluster!
A: Well.. it's not ideal. But I got to set up a load balancer for something running in Kubernetes and expose it to the world, and now I can quickly see what's happening in my demo cluster. I mean, set up some firewall rules and you're (sort of) golden!
A: Because the PV is created automatically by a PVC. If you need `Retain`, either edit the PV after it's been created, or create a PV spec yourself.
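For illustration, a hedged sketch of the kind of PVC that makes GKE provision the PV automatically (name and size are assumptions), plus the documented way to flip the reclaim policy afterwards (the PV name is a placeholder):

```yaml
# Sketch only: GKE's default StorageClass dynamically provisions a PV for this claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pvc           # assumed claim name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi              # assumed size
```

$ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'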
- Add dashboards now that I've bumped to v5
- Datasources as code (see the sketch below)
- ~~Upgrade Grafana to v5 for more Config as Code!~~ (done)
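A hedged sketch of what the "Datasources as code" item could look like — Grafana 5 can provision datasources from YAML files placed in /etc/grafana/provisioning/datasources/; the in-cluster Service DNS name below is an assumption:

```yaml
# Sketch only: not part of the repo (yet); Grafana 5 datasource provisioning.
apiVersion: 1
datasources:
- name: Prometheus
  type: prometheus
  access: proxy
  url: http://prometheus-service:9090   # assumed Service name, same URL you'd otherwise enter manually
  isDefault: true
```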