Hetzner Cloud Kubernetes provider 🏖️

Unofficial Terraform module to provision Kubernetes on the Hetzner Cloud.


Create a Kubernetes cluster on the Hetzner Cloud.

Getting Started

The Hetzner Cloud provider needs to be configured with a token generated from the dashboard, following the documentation. Provide a Hetzner Cloud SSH key resource to access the cluster machines:

resource "hcloud_ssh_key" "demo_cluster" {
  name       = "demo-cluster"
  public_key = file("~/.ssh/hcloud.pub")
}
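
The token itself is typically declared as a sensitive variable and used to configure the hcloud provider. A minimal sketch, assuming the variable is named hcloud_token as in the module call below (the description is illustrative):

variable "hcloud_token" {
  description = "Hetzner Cloud API token" # illustrative description
  type        = string
  sensitive   = true
}

provider "hcloud" {
  token = var.hcloud_token
}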

Create a Kubernetes cluster:

module "hcloud_kubernetes_cluster" {
  source          = "JWDobken/kubernetes/hcloud"
  cluster_name    = "demo-cluster"
  hcloud_token    = var.hcloud_token
  hcloud_ssh_keys = [hcloud_ssh_key.demo_cluster.id]
  master_type     = "cx11" # optional
  worker_type     = "cx21" # optional
  worker_count    = 3
}

output "kubeconfig" {
  value = module.hcloud_kubernetes_cluster.kubeconfig
}

When the cluster is deployed, the kubeconfig to reach the cluster is available from the output. There are many ways to continue, but you can store it to a file:

terraform output -raw kubeconfig > demo-cluster.conf

and verify access by listing the cluster nodes:

$ kubectl get nodes --kubeconfig=demo-cluster.conf
NAME       STATUS   ROLES                AGE   VERSION
master-1   Ready    control-plane,master 95s   v1.21.3
worker-1   Ready    <none>               72s   v1.21.3
worker-2   Ready    <none>               73s   v1.21.3
worker-3   Ready    <none>               73s   v1.21.3
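
Alternatively, the kubeconfig can be written to disk directly from Terraform. A minimal sketch using the hashicorp/local provider (the file name and permissions are illustrative; note that the credentials also end up in the Terraform state):

resource "local_file" "kubeconfig" {
  # writes the module's kubeconfig output next to the configuration
  content         = module.hcloud_kubernetes_cluster.kubeconfig
  filename        = "${path.module}/demo-cluster.conf"
  file_permission = "0600"
}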

Load Balancer

The Hetzner Cloud Controller Manager deploys a load balancer for any Service of type LoadBalancer, which can be configured with service.annotations. It is also possible to create the load balancer within the network using the Terraform provider:

resource "hcloud_load_balancer" "load_balancer" {
  name               = "demo-cluster-lb"
  load_balancer_type = "lb11"
  location           = "nbg1"
}

resource "hcloud_load_balancer_network" "cluster_network" {
  load_balancer_id = hcloud_load_balancer.load_balancer.id
  network_id       = module.hcloud_kubernetes_cluster.network_id
}

...and pass its name via service.annotations. For example, deploy an ingress controller, such as Bitnami's NGINX Ingress Controller, with the name of the load balancer as an annotation:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm upgrade --install nginx-ingress \
    --version 5.6.13 \
    --set service.annotations."load-balancer\.hetzner\.cloud/name"="demo-cluster-lb" \
    bitnami/nginx-ingress-controller
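
The same annotation can be attached to any Service of type LoadBalancer. As a sketch, using the Terraform kubernetes provider configured in the next section (the service name, selector and ports are illustrative):

resource "kubernetes_service" "demo" {
  metadata {
    name = "demo"
    annotations = {
      # points the Hetzner Cloud Controller Manager at the pre-created load balancer
      "load-balancer.hetzner.cloud/name" = "demo-cluster-lb"
    }
  }

  spec {
    type = "LoadBalancer"
    selector = {
      app = "demo"
    }
    port {
      port        = 80
      target_port = 8080
    }
  }
}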

Chaining other Terraform modules

TLS certificate credentials from the output can be used to chain other Terraform providers, such as the Helm provider or the Kubernetes provider:

provider "helm" {
  kubernetes {
    host = module.hcloud_kubernetes_cluster.endpoint

    cluster_ca_certificate = base64decode(module.hcloud_kubernetes_cluster.certificate_authority_data)
    client_certificate     = base64decode(module.hcloud_kubernetes_cluster.client_certificate_data)
    client_key             = base64decode(module.hcloud_kubernetes_cluster.client_key_data)
  }
}

provider "kubernetes" {
  host = module.hcloud_kubernetes_cluster.endpoint

  client_certificate     = base64decode(module.hcloud_kubernetes_cluster.client_certificate_data)
  client_key             = base64decode(module.hcloud_kubernetes_cluster.client_key_data)
  cluster_ca_certificate = base64decode(module.hcloud_kubernetes_cluster.certificate_authority_data)
}
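
With these providers in place, workloads can be managed from the same configuration. For example, the ingress controller installed above with the Helm CLI could also be expressed as a helm_release resource; a sketch in which the release name is illustrative and the chart version is left unpinned:

resource "helm_release" "nginx_ingress" {
  name       = "nginx-ingress"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "nginx-ingress-controller"

  set {
    # dots in the annotation key must be escaped for Helm
    name  = "service.annotations.load-balancer\\.hetzner\\.cloud/name"
    value = "demo-cluster-lb"
  }
}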

Considered features:

  • When a node is destroyed, I still need to run kubectl drain <nodename> and kubectl delete node <nodename> manually. Compare the actual node list with kubectl get nodes --output 'jsonpath={.items[*].metadata.name}'.
  • High availability for the control plane.
  • Node-pool architecture, with the option to label and taint nodes.
  • Initialize multiple master nodes.

Acknowledgements

This module came about when I was looking for an affordable Kubernetes cluster. There is an article from Christian Beneke, and there are a couple of Terraform projects on which the current module is heavily based.

Feel free to contribute or reach out to me.