add k3s flavor support #139

Merged (2 commits, Feb 27, 2024)
59 changes: 59 additions & 0 deletions docs/src/flavors/clusterclass-kubeadm.md
@@ -0,0 +1,59 @@
# Kubeadm ClusterClass
## Specification
| Control Plane | CNI | Default OS | Installs ClusterClass |
|---------------|--------|--------------|-----------------------|
| Kubeadm | Cilium | Ubuntu 22.04 | Yes |
## Prerequisites
[Quickstart](../topics/getting-started.md) completed
## Usage
### Create ClusterClass and first cluster
1. Generate the ClusterClass and cluster manifests
```bash
clusterctl generate cluster test-cluster --infrastructure linode:0.0.0 --flavor clusterclass-kubeadm > test-cluster.yaml
```
2. Apply cluster manifests
```bash
kubectl apply -f test-cluster.yaml
```
### (Optional) Create a second cluster using the existing ClusterClass
1. Generate cluster manifests
```bash
clusterctl generate cluster test-cluster-2 --flavor clusterclass-kubeadm > test-cluster-2.yaml
```
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    ccm: linode
    cni: cilium
    crs: test-cluster-2-crs
  name: test-cluster-2
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.128.0/17
  topology:
    class: kubeadm
    controlPlane:
      replicas: 1
    variables:
      - name: region
        value: us-ord
      - name: controlPlaneMachineType
        value: g6-standard-2
      - name: workerMachineType
        value: g6-standard-2
    version: v1.29.1
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          replicas: 1
```
2. Apply cluster manifests
```bash
kubectl apply -f test-cluster-2.yaml
```
16 changes: 16 additions & 0 deletions docs/src/flavors/default.md
@@ -0,0 +1,16 @@
# Default
## Specification
| Control Plane | CNI | Default OS | Installs ClusterClass |
|---------------|--------|--------------|-----------------------|
| Kubeadm | Cilium | Ubuntu 22.04 | No |
## Prerequisites
[Quickstart](../topics/getting-started.md) completed
## Usage
1. Generate cluster yaml
```bash
clusterctl generate cluster test-cluster --infrastructure linode:0.0.0 > test-cluster.yaml
```
2. Apply cluster yaml
```bash
kubectl apply -f test-cluster.yaml
```
20 changes: 20 additions & 0 deletions docs/src/flavors/flavors.md
@@ -0,0 +1,20 @@
# Flavors

Infrastructure provider authors can supply `clusterctl` with different types
of cluster templates, referred to as "flavors". You can use the `--flavor` flag
to specify which flavor to use for a cluster, e.g.:

```bash
clusterctl generate cluster test-cluster --flavor clusterclass
```

To use the default flavor, omit the `--flavor` flag.

See the [`clusterctl` flavors docs](https://cluster-api.sigs.k8s.io/clusterctl/commands/generate-cluster.html#flavors) for more information.


## Supported flavors

- [Default (kubeadm)](default.md)
- [Cluster Class Kubeadm](clusterclass-kubeadm.md)
- [k3s](k3s.md)
36 changes: 36 additions & 0 deletions docs/src/flavors/k3s.md
@@ -0,0 +1,36 @@
# K3s
## Specification
| Control Plane | CNI | Default OS | Installs ClusterClass |
|-----------------------------|--------|--------------|-----------------------|
| [k3s](https://docs.k3s.io/) | Cilium | Ubuntu 22.04 | No |
## Prerequisites
* [Quickstart](../topics/getting-started.md) completed
* Select a [k3s Kubernetes version](https://github.com/k3s-io/k3s/releases) to use as the cluster's Kubernetes version
```bash
export KUBERNETES_VERSION=v1.29.1+k3s2
```
* Install the [k3s bootstrap provider](https://github.com/k3s-io/cluster-api-k3s) into your management cluster:
* Add the following to `~/.cluster-api/clusterctl.yaml` for the k3s bootstrap/control plane providers
```yaml
providers:
  - name: "k3s"
    url: https://github.com/k3s-io/cluster-api-k3s/releases/latest/bootstrap-components.yaml
    type: "BootstrapProvider"
  - name: "k3s"
    url: https://github.com/k3s-io/cluster-api-k3s/releases/latest/control-plane-components.yaml
    type: "ControlPlaneProvider"
```
* Install the k3s provider into your management cluster
```shell
clusterctl init --bootstrap k3s --control-plane k3s
```
## Usage
1. Generate cluster yaml
```bash
clusterctl generate cluster test-cluster --infrastructure linode:0.0.0 --flavor k3s > test-k3s-cluster.yaml
```
2. Apply cluster yaml
```bash
kubectl apply -f test-k3s-cluster.yaml
```
32 changes: 26 additions & 6 deletions docs/src/topics/getting-started.md
@@ -17,14 +17,34 @@
 For more information please see the
 [Linode Guide](https://www.linode.com/docs/products/tools/api/guides/manage-api-tokens/#create-an-api-token).
 ```
 
-## Setting up your Linode environment
+## Setting up your cluster environment variables
 
-Once you have provisioned your PAT, save it in an environment variable:
+Once you have provisioned your PAT, save it in an environment variable along with other required settings:
 ```bash
-export LINODE_TOKEN="<LinodePAT>"
+# Cluster settings
+export CLUSTER_NAME=capl-cluster
+export KUBERNETES_VERSION=v1.29.1
+
+# Linode settings
+export LINODE_REGION=us-ord
+export LINODE_TOKEN=<your linode PAT>
+export LINODE_CONTROL_PLANE_MACHINE_TYPE=g6-standard-2
+export LINODE_MACHINE_TYPE=g6-standard-2
 ```
 
-## Building your first cluster
+## Register linode locally as an infrastructure provider
+1. Generate local release files
+```bash
+make local-release
+```
+2. Add `linode` as an infrastructure provider in `~/.cluster-api/clusterctl.yaml`
+```yaml
+providers:
+  - name: linode
+    url: ~/cluster-api-provider-linode/infrastructure-linode/0.0.0/infrastructure-components.yaml
+    type: InfrastructureProvider
+```
+
+## Deploying your first cluster
 
-Please continue from the [setting up the environment](../developers/development.md#setting-up-the-environment)
-section for creating your first Kubernetes cluster on Linode using Cluster API.
+Please refer to the [default flavor](../flavors/default.md) section for creating your first Kubernetes cluster on Linode using Cluster API.
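Before generating manifests, a quick sanity check can confirm the variables from the previous section are exported. This is a hypothetical helper, not part of the docs; the example values stand in for real settings:

```bash
# Hypothetical sanity check: fail fast if any variable required by the
# cluster templates is unset. Example values stand in for real settings.
export CLUSTER_NAME=capl-cluster
export KUBERNETES_VERSION=v1.29.1
export LINODE_REGION=us-ord
export LINODE_TOKEN=example-token

missing=0
for v in CLUSTER_NAME KUBERNETES_VERSION LINODE_REGION LINODE_TOKEN; do
  # indirect lookup of the variable named in $v
  if [ -z "$(eval echo "\$$v")" ]; then
    echo "missing: $v"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "all required variables set"
```

Running this before `clusterctl generate cluster` avoids templates rendering with empty values.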
12 changes: 2 additions & 10 deletions templates/flavors/README.md
@@ -1,16 +1,8 @@
 # Flavors
 
-In `clusterctl` the infrastructure provider authors can provide different types
-of cluster templates referred to as "flavors". You can use the `--flavor` flag
-to specify which flavor to use for a cluster, e.g:
+## [Flavor usage documentation](https://linode.github.io/cluster-api-provider-linode/flavors/flavors.html)
 
-```shell
-clusterctl generate cluster test-cluster --flavor clusterclass
-```
-
-To use the default flavor, omit the `--flavor` flag.
-
-See the [`clusterctl` flavors docs](https://cluster-api.sigs.k8s.io/clusterctl/commands/generate-cluster.html#flavors) for more information.
 ## Development
 
 This directory contains each of the flavors for CAPL. Each directory besides `base` will be used to
 create a flavor by running `kustomize build` on the directory. The name of the directory will be
18 changes: 18 additions & 0 deletions templates/flavors/k3s/k3sConfigTemplate.yaml
@@ -0,0 +1,18 @@
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KThreesConfigTemplate
metadata:
  name: ${CLUSTER_NAME}-md-0
spec:
  template:
    spec:
      agentConfig:
        nodeName: '{{ ds.meta_data.label }}'
        kubeletArgs:
          - "provider-id=linode://{{ ds.meta_data.id }}"
      preK3sCommands:
        - |
          mkdir -p /etc/rancher/k3s/config.yaml.d/
          echo "node-ip: $(hostname -I | grep -oE 192\.168\.[0-9]+\.[0-9]+)" >> /etc/rancher/k3s/config.yaml.d/capi-config.yaml
        - sed -i '/swap/d' /etc/fstab
        - swapoff -a
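The `grep -oE` in `preK3sCommands` above pulls the node's private address out of the list printed by `hostname -I` (on Linode, private IPv4 addresses typically fall in 192.168.128.0/17). A standalone sketch of the extraction, using a made-up address list:

```bash
# Simulated `hostname -I` output: public IPv4 first, then the 192.168.x.x
# private IP (addresses here are made up for illustration)
addrs="203.0.113.10 192.168.140.5"

# Same regex as the template: keep only the 192.168.x.x address
node_ip=$(echo "$addrs" | grep -oE '192\.168\.[0-9]+\.[0-9]+')
echo "node-ip: $node_ip"
```

This is what lands in `capi-config.yaml`, pinning k3s's advertised node IP to the private interface rather than the public one.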
50 changes: 50 additions & 0 deletions templates/flavors/k3s/k3sControlPlane.yaml
@@ -0,0 +1,50 @@
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KThreesControlPlane
metadata:
  name: ${CLUSTER_NAME}-control-plane
spec:
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: LinodeMachineTemplate
    name: ${CLUSTER_NAME}-control-plane
  kthreesConfigSpec:
    files:
      - content: |
          flannel-backend: none
          disable-network-policy: true
        owner: root:root
        path: /etc/rancher/k3s/config.yaml.d/capi-config.yaml
      - contentFrom:
          secret:
            key: cilium.yaml
            name: linode-${CLUSTER_NAME}-crs-0
        owner: root:root
        path: /var/lib/rancher/k3s/server/manifests/cilium.yaml
      - contentFrom:
          secret:
            key: linode-ccm.yaml
            name: linode-${CLUSTER_NAME}-crs-0
        owner: root:root
        path: /var/lib/rancher/k3s/server/manifests/linode-ccm.yaml
      - contentFrom:
          secret:
            key: linode-token-region.yaml
            name: linode-${CLUSTER_NAME}-crs-0
        owner: root:root
        path: /var/lib/rancher/k3s/server/manifests/linode-token-region.yaml
    serverConfig:
      disableComponents:
        - servicelb
        - traefik
    agentConfig:
      nodeName: '{{ ds.meta_data.label }}'
      kubeletArgs:
        - "provider-id=linode://{{ ds.meta_data.id }}"
    preK3sCommands:
      - |
        echo "node-ip: $(hostname -I | grep -oE 192\.168\.[0-9]+\.[0-9]+)" >> /etc/rancher/k3s/config.yaml.d/capi-config.yaml
      - sed -i '/swap/d' /etc/fstab
      - swapoff -a
  replicas: ${CONTROL_PLANE_MACHINE_COUNT}
  version: ${KUBERNETES_VERSION}
22 changes: 22 additions & 0 deletions templates/flavors/k3s/kustomization.yaml
@@ -0,0 +1,22 @@
resources:
  - ../base
  - k3sControlPlane.yaml
  - k3sConfigTemplate.yaml
  - secret.yaml
patches:
  - target:
      group: cluster.x-k8s.io
      version: v1beta1
      kind: Cluster
    patch: |-
      - op: replace
        path: /spec/controlPlaneRef/kind
        value: KThreesControlPlane
  - target:
      group: cluster.x-k8s.io
      version: v1beta1
      kind: MachineDeployment
    patch: |-
      - op: replace
        path: /spec/template/spec/bootstrap/configRef/kind
        value: KThreesConfigTemplate
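The two JSON 6902 patches above only swap the `kind` fields, so after `kustomize build` the base Cluster ends up pointing at the k3s control plane. Roughly, the patched Cluster would look like this (a sketch; the surrounding fields are assumed to come from the base template):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: ${CLUSTER_NAME}
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KThreesControlPlane   # was the kubeadm control plane kind in the base
    name: ${CLUSTER_NAME}-control-plane
```

Patching only the `kind` keeps the flavor small: everything else (networking, machine templates, labels) is inherited unchanged from `../base`.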
50 changes: 50 additions & 0 deletions templates/flavors/k3s/secret.yaml
@@ -0,0 +1,50 @@
---
apiVersion: v1
kind: Secret
metadata:
  name: linode-${CLUSTER_NAME}-crs-0
stringData:
  linode-token-region.yaml: |-
    kind: Secret
    apiVersion: v1
    metadata:
      name: linode-token-region
      namespace: kube-system
    stringData:
      apiToken: ${LINODE_TOKEN}
      region: ${LINODE_REGION}
  cilium.yaml: |-
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      namespace: kube-system
      name: cilium
    spec:
      targetNamespace: kube-system
      version: ${CILIUM_VERSION:=1.15.0}
      chart: cilium
      repo: https://helm.cilium.io/
      bootstrap: true
      valuesContent: |-
        hubble:
          relay:
            enabled: true
          ui:
            enabled: true
  linode-ccm.yaml: |-
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      namespace: kube-system
      name: ccm-linode
    spec:
      targetNamespace: kube-system
      version: ${LINODE_CCM_VERSION:=v0.3.24}
      chart: ccm-linode
      repo: https://linode.github.io/linode-cloud-controller-manager/
      bootstrap: true
      valuesContent: |-
        secretRef:
          name: "linode-token-region"
        nodeSelector:
          node-role.kubernetes.io/control-plane: "true"
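The `${CILIUM_VERSION:=1.15.0}` and `${LINODE_CCM_VERSION:=v0.3.24}` expressions rely on clusterctl's envsubst-style substitution, which (like POSIX shell) falls back to the default when the variable is unset. A pure-shell demonstration of the same expansion:

```bash
# ${VAR:=default} expands to the default and assigns it when VAR is unset/empty
unset CILIUM_VERSION
echo "version: ${CILIUM_VERSION:=1.15.0}"
echo "CILIUM_VERSION is now ${CILIUM_VERSION}"
```

So users can pin a chart version by exporting the variable before running `clusterctl generate cluster`, and get a sensible default otherwise.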