Integration test for disable default cni flag #113

Merged (7 commits) on Sep 9, 2024
Changes from 2 commits
12 changes: 6 additions & 6 deletions README.md
@@ -9,12 +9,12 @@ This project offers a cluster API bootstrap provider controller that manages the
### Prerequisites

* Install clusterctl following the [upstream instructions](https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl)
```
```bash
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.3/clusterctl-linux-amd64 -o clusterctl
```
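After downloading, the binary still needs to be made executable and placed on the PATH. A short sketch following the upstream quick start:
```bash
chmod +x clusterctl
sudo mv clusterctl /usr/local/bin/clusterctl
clusterctl version   # sanity check
```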

* Install a bootstrap Kubernetes cluster. To use MicroK8s as a bootstrap cluster:
```
```bash
sudo snap install microk8s --classic
sudo microk8s.config > ~/.kube/config
sudo microk8s enable dns
@@ -24,7 +24,7 @@ sudo microk8s enable dns

To configure clusterctl with the two MicroK8s providers, edit `~/.cluster-api/clusterctl.yaml`
and add the following:
```
```yaml
providers:
- name: "microk8s"
url: "https://github.com/canonical/cluster-api-bootstrap-provider-microk8s/releases/latest/bootstrap-components.yaml"
@@ -47,18 +47,18 @@ Alternatively, you can build the providers manually as described in the following
### Building from source

* Install the cluster provider of your choice. Have a look at the [cluster API book](https://cluster-api.sigs.k8s.io/user/quick-start.html#initialization-for-common-providers) for your options at this step. You should deploy only the infrastructure controller, leaving the bootstrap and control plane ones empty. For example, assuming we want to provision a MicroK8s cluster on AWS:
```
```bash
clusterctl init --infrastructure aws --bootstrap "-" --control-plane "-"
```
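For AWS in particular, the infrastructure provider expects IAM resources and encoded credentials to be prepared first. A minimal sketch, assuming `clusterawsadm` is installed and AWS credentials are exported in the environment (see the cluster API AWS book):
```bash
# One-time IAM setup for cluster API on AWS.
clusterawsadm bootstrap iam create-cloudformation-stack
# Encode the credentials for the provider controller.
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
```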

* Clone the two cluster API MicroK8s specific repositories and start the controllers on two separate terminals:
```
```bash
cd $GOPATH/src/github.com/canonical/cluster-api-bootstrap-provider-microk8s/
make install
make run
```
And:
```
```bash
cd $GOPATH/src/github.com/canonical/cluster-api-control-plane-provider-microk8s/
make install
make run
67 changes: 37 additions & 30 deletions integration/README.md
@@ -4,36 +4,43 @@

The integration/e2e tests have the following prerequisites:

* an environment variable `CLUSTER_MANIFEST_FILE` pointing to the cluster manifest. Cluster manifests can be produced with the help of the templates found under `templates`. For example:

Review comment: Are those prerequisites not required anymore?

maci3jka (author): Yes, those are not required anymore; they are hardcoded in the cluster manifest dir.

* an environment variable `CLUSTER_MANIFEST_FILE` pointing to the cluster manifest. Cluster manifests can be produced with the help of the templates found under [`templates`](../templates). For example:
```bash
export AWS_REGION=us-east-1
export AWS_SSH_KEY_NAME=capi
export CONTROL_PLANE_MACHINE_COUNT=3
export WORKER_MACHINE_COUNT=3
export AWS_CREATE_BASTION=false
export AWS_PUBLIC_IP=false
export AWS_CONTROL_PLANE_MACHINE_FLAVOR=t3.large
export AWS_NODE_MACHINE_FLAVOR=t3.large
export CLUSTER_NAME=test-ci-cluster
clusterctl generate cluster ${CLUSTER_NAME} --from "templates/cluster-template-aws.yaml" --kubernetes-version 1.27.0 > cluster.yaml
export CLUSTER_MANIFEST_FILE=$PWD/cluster.yaml
```
```
export AWS_REGION=us-east-1
export AWS_SSH_KEY_NAME=capi
export CONTROL_PLANE_MACHINE_COUNT=3
export WORKER_MACHINE_COUNT=3
export AWS_CREATE_BASTION=false
export AWS_PUBLIC_IP=false
export AWS_CONTROL_PLANE_MACHINE_FLAVOR=t3.large
export AWS_NODE_MACHINE_FLAVOR=t3.large
export CLUSTER_NAME=test-ci-cluster
clusterctl generate cluster ${CLUSTER_NAME} --from "templates/cluster-template-aws.yaml" --kubernetes-version 1.25.0 > cluster.yaml
export CLUSTER_MANIFEST_FILE=$PWD/cluster.yaml
```

> NOTE: AWS_SSH_KEY_NAME is the name of the SSH key in AWS that you plan to use. If you don't have one yet, refer
> to the cluster API on AWS [prerequisites documentation](https://cluster-api-aws.sigs.k8s.io/topics/using-clusterawsadm-to-fulfill-prerequisites#ssh-key-pair), or create one as sketched below.
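A key pair can be created up front with the AWS CLI (a minimal sketch; the key name `capi` matches the export above):
```bash
# Requires a configured AWS CLI; writes the private key to capi.pem.
aws ec2 create-key-pair --key-name capi --query 'KeyMaterial' --output text > capi.pem
chmod 400 capi.pem
```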
* Additional environment variables when testing cluster upgrades:
```
export CAPI_UPGRADE_VERSION=v1.26.0
export CAPI_UPGRADE_MD_NAME=${CLUSTER_NAME}-md-0
export CAPI_UPGRADE_MD_TYPE=machinedeployments.cluster.x-k8s.io
export CAPI_UPGRADE_CP_NAME=${CLUSTER_NAME}-control-plane
export CAPI_UPGRADE_CP_TYPE=microk8scontrolplanes.controlplane.cluster.x-k8s.io

# Change the control plane and worker machine count to desired values for in-place upgrades tests and create a new cluster manifest.
CONTROL_PLANE_MACHINE_COUNT=1
WORKER_MACHINE_COUNT=1
clusterctl generate cluster ${CLUSTER_NAME} --from "templates/cluster-template-aws.yaml" --kubernetes-version 1.25.0 > cluster-inplace.yaml
export CLUSTER_INPLACE_MANIFEST_FILE=$PWD/cluster-inplace.yaml
```
```bash
export CAPI_UPGRADE_VERSION=v1.28.0
export CAPI_UPGRADE_MD_NAME=${CLUSTER_NAME}-md-0
export CAPI_UPGRADE_MD_TYPE=machinedeployments.cluster.x-k8s.io
export CAPI_UPGRADE_CP_NAME=${CLUSTER_NAME}-control-plane
export CAPI_UPGRADE_CP_TYPE=microk8scontrolplanes.controlplane.cluster.x-k8s.io
# Change the control plane and worker machine count to desired values for in-place upgrades tests and create a new cluster manifest.
CONTROL_PLANE_MACHINE_COUNT=1
WORKER_MACHINE_COUNT=1
clusterctl generate cluster ${CLUSTER_NAME} --from "templates/cluster-template-aws.yaml" --kubernetes-version 1.27.0 > cluster-inplace.yaml
export CLUSTER_INPLACE_MANIFEST_FILE=$PWD/cluster-inplace.yaml
```

* Additional environment variables when testing disable default CNI flag:
```bash
export DISABLE_DEFAULT_CNI=true
export POST_RUN_COMMANDS='["helm install cilium cilium/cilium --namespace kube-system --set cni.confPath=/var/snap/microk8s/current/args/cni-network --set cni.binPath=/var/snap/microk8s/current/opt/cni/bin --set daemon.runPath=/var/snap/microk8s/current/var/run/cilium --set operator.replicas=1 --set ipam.operator.clusterPoolIPv4PodCIDRList=\"10.1.0.0/16\" --set nodePort.enabled=true"]' # install Cilium in place of the default CNI
clusterctl generate cluster ${CLUSTER_NAME} --from "templates/cluster-template-aws.yaml" --kubernetes-version 1.27.0 > cluster_disable_default_cni.yaml
export CLUSTER_DISABLE_DEFAULT_CNI_MANIFEST_FILE=$PWD/cluster_disable_default_cni.yaml
```
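With these variables set, the new test can be run on its own (a sketch, assuming `kubectl` points at the management cluster and the repository root is the working directory):
```bash
go test -v -timeout 0 -run TestDisableDefaultCNI ./integration/...
```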

* `clusterctl` available in the PATH

@@ -67,10 +74,10 @@ microk8s config > ~/.kube/config

#### Initialize infrastructure provider

Visit [here](https://cluster-api.sigs.k8s.io/user/quick-start.html#initialization-for-common-providers) for a list of common infrasturture providers.
Visit [here](https://cluster-api.sigs.k8s.io/user/quick-start.html#initialization-for-common-providers) for a list of common infrastructure providers.

```bash
clusterctl init --infrastructure <infra> --bootstrap - --control-plane -
clusterctl init --infrastructure <infra> --bootstrap - --control-plane -
```

#### Build Docker images and release manifests from the checked out source code
@@ -83,7 +90,7 @@ docker push <username>/capi-bootstrap-provider-microk8s:<tag>
sed "s,docker.io/cdkbot/capi-bootstrap-provider-microk8s:latest,docker.io/<username>/capi-bootstrap-provider-microk8s:<tag>," -i bootstrap-components.yaml
```

Similarly for control-plane provider
Similarly, for control-plane provider
```bash
cd control-plane
docker build -t <username>/capi-control-plane-provider-microk8s:<tag> .
54 changes: 51 additions & 3 deletions integration/e2e_test.go
@@ -45,7 +45,7 @@ func init() {

// TestBasic waits for the target cluster to deploy and start a 30 pod deployment.
// The CLUSTER_MANIFEST_FILE environment variable should point to a manifest with the target cluster
// kubectl and clusterctl have to be avaibale in the caller's path.
// kubectl and clusterctl have to be available in the caller's path.
// kubectl should be set up so it uses the kubeconfig of the management cluster by default.
func TestBasic(t *testing.T) {
cluster_manifest_file := os.Getenv("CLUSTER_MANIFEST_FILE")
@@ -88,7 +88,29 @@ func TestInPlaceUpgrade(t *testing.T) {
// Important: the cluster is deleted in the Cleanup function
// which is called after all subtests are finished.
t.Logf("Deleting the cluster")
}

// TestDisableDefaultCNI deploys a cluster with the default CNI disabled.
// The CLUSTER_DISABLE_DEFAULT_CNI_MANIFEST_FILE environment variable should point to a manifest with the target cluster.
// The post-run commands are expected to install Calico in place of the default CNI.
func TestDisableDefaultCNI(t *testing.T) {
cluster_manifest_file := os.Getenv("CLUSTER_DISABLE_DEFAULT_CNI_MANIFEST_FILE")
if cluster_manifest_file == "" {
t.Fatalf("Environment variable CLUSTER_DISABLE_DEFAULT_CNI_MANIFEST_FILE is not set. " +
"CLUSTER_DISABLE_DEFAULT_CNI_MANIFEST_FILE is expected to hold the PATH to a cluster manifest.")
}
t.Logf("Cluster to setup is in %s", cluster_manifest_file)

setupCheck(t)
t.Cleanup(teardownCluster)

t.Run("DeployCluster", func(t *testing.T) { deployCluster(t, os.Getenv("CLUSTER_DISABLE_DEFAULT_CNI_MANIFEST_FILE")) })
t.Run("ValidateCalico", func(t *testing.T) { validateCalico(t) })
t.Run("DeployMicrobot", func(t *testing.T) { deployMicrobot(t) })
t.Run("UpgradeClusterRollout", func(t *testing.T) { upgradeCluster(t, "RollingUpgrade") })
// Important: the cluster is deleted in the Cleanup function
// which is called after all subtests are finished.
t.Logf("Deleting the cluster")
}

// setupCheck checks that the environment is ready to run the tests.
@@ -149,7 +171,7 @@ func teardownCluster() {

// deployCluster deploys a cluster using the manifest in CLUSTER_MANIFEST_FILE.
func deployCluster(t testing.TB, cluster_manifest_file string) {
t.Log("Setting up the cluster")
t.Logf("Setting up the cluster using %s", cluster_manifest_file)
command := []string{"kubectl", "apply", "-f", cluster_manifest_file}
cmd := exec.Command(command[0], command[1:]...)
outputBytes, err := cmd.CombinedOutput()
@@ -182,7 +204,7 @@ func deployCluster(t testing.TB, cluster_manifest_file string) {
t.Fatal(err)
} else {
attempt++
t.Log("Failed to get the target's kubeconfig, retrying.")
t.Logf("Failed to get the target's kubeconfig for %s, retrying.", cluster)
time.Sleep(20 * time.Second)
}
} else {
@@ -307,6 +329,32 @@ func deployMicrobot(t testing.TB) {
command = []string{"kubectl", "--kubeconfig=" + KUBECONFIG, "wait", "deploy/bot", "--for=jsonpath={.status.readyReplicas}=30"}
for {
cmd = exec.Command(command[0], command[1:]...)
outputBytes, err = cmd.CombinedOutput()
if err != nil {
t.Log(string(outputBytes))
if attempt >= maxAttempts {
t.Fatal(err)
} else {
attempt++
t.Log("Retrying")
time.Sleep(10 * time.Second)
}
} else {
break
}
}
}

// validateCalico checks that the calico-node daemonset is deployed and available.
func validateCalico(t testing.TB) {
t.Log("Validate Calico")
// Make sure the daemonset has as many available pods as there are machines (6 = 3 control plane + 3 workers).
attempt := 0
maxAttempts := 60
t.Log("Waiting for the deployment to complete")
command := []string{"kubectl", "--kubeconfig=" + KUBECONFIG, "-n", "kube-system", "wait", "ds/calico-node", "--for=jsonpath={.status.numberAvailable}=6"}
for {
cmd := exec.Command(command[0], command[1:]...)
outputBytes, err := cmd.CombinedOutput()
if err != nil {
t.Log(string(outputBytes))
2 changes: 2 additions & 0 deletions templates/cluster-template-aws.yaml
@@ -108,3 +108,5 @@ spec:
initConfiguration:
riskLevel: "${SNAP_RISKLEVEL:=}"
confinement: "${SNAP_CONFINEMENT:=}"
disableDefaultCNI: ${DISABLE_DEFAULT_CNI:=false}
postRunCommands: ${POST_RUN_COMMANDS:=[]}
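Both fields are opt-in: clusterctl's variable substitution honours the `${VAR:=default}` fallback, so manifests generated without these variables keep today's behaviour. A quick check (a sketch; assumes the other template variables from the examples above are still exported):
```bash
unset DISABLE_DEFAULT_CNI POST_RUN_COMMANDS
clusterctl generate cluster ${CLUSTER_NAME} --from templates/cluster-template-aws.yaml \
  --kubernetes-version 1.27.0 | grep -E 'disableDefaultCNI|postRunCommands'
# expected output:
#   disableDefaultCNI: false
#   postRunCommands: []
```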