Add local-path-provisioner addon #15062

Merged: 4 commits on Sep 26, 2023
4 changes: 4 additions & 0 deletions deploy/addons/assets.go
@@ -44,6 +44,10 @@ var (
	//go:embed storage-provisioner-gluster/*.tmpl
	StorageProvisionerGlusterAssets embed.FS

	// StorageProvisionerRancherAssets assets for storage-provisioner-rancher addon
	//go:embed storage-provisioner-rancher/*.tmpl
	StorageProvisionerRancherAssets embed.FS

	// EfkAssets assets for efk addon
	//go:embed efk/*.tmpl
	EfkAssets embed.FS
@@ -0,0 +1,131 @@
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [ "" ]
    resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
    verbs: [ "get", "list", "watch" ]
  - apiGroups: [ "" ]
    resources: [ "endpoints", "persistentvolumes", "pods" ]
    verbs: [ "*" ]
  - apiGroups: [ "" ]
    resources: [ "events" ]
    verbs: [ "create", "patch" ]
  - apiGroups: [ "storage.k8s.io" ]
    resources: [ "storageclasses" ]
    verbs: [ "get", "list", "watch" ]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: {{.CustomRegistries.LocalPathProvisioner | default .ImageRepository | default .Registries.LocalPathProvisioner }}{{ .Images.LocalPathProvisioner }}
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/opt/local-path-provisioner"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
        - name: helper-pod
          image: {{.CustomRegistries.Helper | default .ImageRepository | default .Registries.Helper }}{{ .Images.Helper }}
          imagePullPolicy: IfNotPresent

11 changes: 10 additions & 1 deletion pkg/addons/addons_storage_classes.go
@@ -39,6 +39,8 @@ func enableOrDisableStorageClasses(cc *config.ClusterConfig, name string, val st
	class := defaultStorageClassProvisioner
	if name == "storage-provisioner-gluster" {
		class = "glusterfile"
	} else if name == "storage-provisioner-rancher" {
		class = "local-path"
	}

	api, err := machine.NewAPIClient()
@@ -62,6 +64,10 @@ func enableOrDisableStorageClasses(cc *config.ClusterConfig, name string, val st
	}

	if enable {
		// Enable addon before marking it as default
		if err = EnableOrDisableAddon(cc, name, val); err != nil {
			return err
		}
		// Only StorageClass for 'name' should be marked as default
		err = storageclass.SetDefaultStorageClass(storagev1, class)
		if err != nil {
@@ -73,7 +79,10 @@ func enableOrDisableStorageClasses(cc *config.ClusterConfig, name string, val st
		if err != nil {
			return errors.Wrapf(err, "Error disabling %s as the default storage class", class)
		}
		if err = EnableOrDisableAddon(cc, name, val); err != nil {
			return err
		}
	}

	return EnableOrDisableAddon(cc, name, val)
	return nil
}
5 changes: 5 additions & 0 deletions pkg/addons/config.go
@@ -166,6 +166,11 @@ var Addons = []*Addon{
		set:       SetBool,
		callbacks: []setFn{enableOrDisableStorageClasses},
	},
	{
		name:      "storage-provisioner-rancher",
		set:       SetBool,
		callbacks: []setFn{enableOrDisableStorageClasses},
	},
	{
		name:      "metallb",
		set:       SetBool,
13 changes: 13 additions & 0 deletions pkg/minikube/assets/addons.go
@@ -208,6 +208,19 @@ var Addons = map[string]*Addon{
		"GlusterfsServer":        "docker.io",
		"GlusterfileProvisioner": "docker.io",
	}),
	"storage-provisioner-rancher": NewAddon([]*BinAsset{
		MustBinAsset(addons.StorageProvisionerRancherAssets,
			"storage-provisioner-rancher/storage-provisioner-rancher.yaml.tmpl",
			vmpath.GuestAddonsDir,
			"storage-provisioner-rancher.yaml",
			"0640"),
	}, false, "storage-provisioner-rancher", "3rd party (Rancher)", "", "", map[string]string{
		"LocalPathProvisioner": "rancher/local-path-provisioner:v0.0.22@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246",
		"Helper":               "busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79",
	}, map[string]string{
		"LocalPathProvisioner": "docker.io",
		"Helper":               "docker.io",
	}),
	"efk": NewAddon([]*BinAsset{
		MustBinAsset(addons.EfkAssets,
			"efk/elasticsearch-rc.yaml.tmpl",
103 changes: 103 additions & 0 deletions site/content/en/docs/tutorials/local_path_provisioner.md
@@ -0,0 +1,103 @@
---
title: "Using Local Path Provisioner"
linkTitle: "Using Local Path Provisioner"
weight: 1
date: 2022-10-05
description: >
Using Local Path Provisioner
---

## Overview

[Local Path Provisioner](https://github.com/rancher/local-path-provisioner) provides a way for Kubernetes users to utilize the local storage on each node, and it supports multi-node setups. This tutorial shows how to set up local-path-provisioner on a two-node minikube cluster.

## Prerequisites

- Minikube version higher than v1.27.0
- kubectl

## Tutorial

- Start a cluster with 2 nodes:

```shell
$ minikube start -n 2
```

- Enable the `storage-provisioner-rancher` addon:

```shell
$ minikube addons enable storage-provisioner-rancher
```
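
- You can double-check that the addon is active with `minikube addons list` (the exact output format varies between minikube versions):

```shell
$ minikube addons list | grep storage-provisioner-rancher
```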

- You should see a Pod in the `local-path-storage` namespace:

```shell
$ kubectl get pods -n local-path-storage
NAME READY STATUS RESTARTS AGE
local-path-provisioner-7f58b4649-hcbk9 1/1 Running 0 38s
```

- The `local-path` StorageClass should be marked as `default`:

```shell
$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 107s
standard k8s.io/minikube-hostpath Delete Immediate false 4m27s
```
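
  Under the hood, the default marker is just the `storageclass.kubernetes.io/is-default-class` annotation on the StorageClass object; as a sketch, you can inspect it directly (note the escaped dots in the jsonpath expression):

```shell
$ kubectl get sc local-path -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'
```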

- The following manifest creates a PVC and a Pod that writes a file to the volume on the second node (minikube-m02):

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
spec:
  restartPolicy: OnFailure
  nodeSelector:
    "kubernetes.io/hostname": "minikube-m02"
  containers:
    - name: busybox
      image: busybox:stable
      command: ["sh", "-c", "echo 'local-path-provisioner' > /test/file1"]
      volumeMounts:
        - name: data
          mountPath: /test
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
```
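
  Apply the manifest in the usual way; `test.yaml` here is just an illustrative name for whatever file you saved it to:

```shell
$ kubectl apply -f test.yaml
```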

```shell
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-pvc Bound pvc-f07e253b-fea7-433a-b0ac-1bcea3f77076 64Mi RWO local-path 5m19s
```

```shell
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-local-path 0/1 Completed 0 5m19s 10.244.1.5 minikube-m02 <none> <none>
```

- On the second node, we can see the created file with the content `local-path-provisioner`:

```shell
$ minikube ssh -n minikube-m02 "cat /opt/local-path-provisioner/pvc-f07e253b-fea7-433a-b0ac-1bcea3f77076_default_test-pvc/file1"
local-path-provisioner
```
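
- To clean up afterwards, delete the test resources and disable the addon. Because the StorageClass uses the `Delete` reclaim policy, the provisioner's teardown script removes the backing directory on the node along with the PersistentVolume:

```shell
$ kubectl delete pod test-local-path
$ kubectl delete pvc test-pvc
$ minikube addons disable storage-provisioner-rancher
```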