diff --git a/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss-by-label.md b/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss-by-label.md
new file mode 100644
index 00000000000..791a9f70da1
--- /dev/null
+++ b/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss-by-label.md
@@ -0,0 +1,260 @@
+## Introduction
+
+- It disrupts the state of GCP persistent disk volumes filtered using a label by detaching them from their VM instances for a specified chaos duration.
+
+!!! tip "Scenario: detach the gcp disk"
+ ![GCP VM Disk Loss By Label](../../images/gcp-vm-disk-loss.png)
+
+## Uses
+
+??? info "View the uses of the experiment"
+ coming soon
+
+## Prerequisites
+
+??? info "Verify the prerequisites"
+    - Ensure that Kubernetes Version > 1.17
+    - Ensure that the Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from here
+    - Ensure that the `gcp-vm-disk-loss-by-label` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from here
+    - Ensure that your service account has editor or owner access for the GCP project.
+ - Ensure that the target disk volume is not a boot disk of any VM instance.
+ - Ensure to create a Kubernetes secret having the GCP service account credentials in the default namespace. A sample secret file looks like:
+
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: cloud-secret
+ type: Opaque
+ stringData:
+ type:
+ project_id:
+ private_key_id:
+ private_key:
+ client_email:
+ client_id:
+ auth_uri:
+ token_uri:
+ auth_provider_x509_cert_url:
+ client_x509_cert_url:
+ ```
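
The `stringData` fields above mirror the keys of a GCP service-account key file, so the secret can be generated from a downloaded key file. A minimal sketch, assuming a parsed key file; the helper name and the demo values are illustrative, not part of the experiment:

```python
import json

# Keys GCP writes into a downloaded service-account key file; they map
# 1:1 onto the stringData fields of the sample secret above.
FIELDS = [
    "type", "project_id", "private_key_id", "private_key", "client_email",
    "client_id", "auth_uri", "token_uri",
    "auth_provider_x509_cert_url", "client_x509_cert_url",
]

def render_cloud_secret(key: dict, name: str = "cloud-secret") -> str:
    """Render the sample Kubernetes Secret from a parsed key file."""
    lines = [
        "apiVersion: v1",
        "kind: Secret",
        "metadata:",
        f"  name: {name}",
        "type: Opaque",
        "stringData:",
    ]
    for field in FIELDS:
        # json.dumps quotes and escapes values (the private key is multi-line)
        lines.append(f"  {field}: {json.dumps(key.get(field, ''))}")
    return "\n".join(lines)

# e.g. with a key dict loaded from a file downloaded via the GCP console:
demo_key = {"type": "service_account", "project_id": "my-project-4513"}
print(render_cloud_secret(demo_key))
```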
+
+## Default Validations
+
+??? info "View the default validations"
+ - All the disk volumes having the target label are attached to their respective instances
+
+## Minimal RBAC configuration example (optional)
+
+!!! tip "NOTE"
+    If you are using this experiment as part of a Litmus workflow scheduled/constructed & executed from Chaos Center, then you may be making use of the [litmus-admin](https://litmuschaos.github.io/litmus/litmus-admin-rbac.yaml) RBAC, which is pre-installed in the cluster as part of the agent setup.
+
+ ??? note "View the Minimal RBAC permissions"
+
+ [embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/gcp/gcp-vm-disk-loss-by-label/rbac.yaml yaml)
+ ```yaml
+ ---
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: gcp-vm-disk-loss-by-label-sa
+ namespace: default
+ labels:
+ name: gcp-vm-disk-loss-by-label-sa
+ app.kubernetes.io/part-of: litmus
+ ---
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRole
+ metadata:
+ name: gcp-vm-disk-loss-by-label-sa
+ labels:
+ name: gcp-vm-disk-loss-by-label-sa
+ app.kubernetes.io/part-of: litmus
+ rules:
+ # Create and monitor the experiment & helper pods
+ - apiGroups: [""]
+ resources: ["pods"]
+ verbs: ["create","delete","get","list","patch","update", "deletecollection"]
+ # Performs CRUD operations on the events inside chaosengine and chaosresult
+ - apiGroups: [""]
+ resources: ["events"]
+ verbs: ["create","get","list","patch","update"]
+ # Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
+ - apiGroups: [""]
+ resources: ["secrets","configmaps"]
+ verbs: ["get","list",]
+ # Track and get the runner, experiment, and helper pods log
+ - apiGroups: [""]
+ resources: ["pods/log"]
+ verbs: ["get","list","watch"]
+ # for configuring and monitor the experiment job by the chaos-runner pod
+ - apiGroups: ["batch"]
+ resources: ["jobs"]
+ verbs: ["create","list","get","delete","deletecollection"]
+ # for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
+ - apiGroups: ["litmuschaos.io"]
+ resources: ["chaosengines","chaosexperiments","chaosresults"]
+ verbs: ["create","list","get","patch","update","delete"]
+ ---
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+ name: gcp-vm-disk-loss-by-label-sa
+ labels:
+ name: gcp-vm-disk-loss-by-label-sa
+ app.kubernetes.io/part-of: litmus
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: gcp-vm-disk-loss-by-label-sa
+ subjects:
+ - kind: ServiceAccount
+ name: gcp-vm-disk-loss-by-label-sa
+ namespace: default
+ ```
+ Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
+
+## Experiment tunables
+
+??? info "check the experiment tunables"
+
+    Mandatory Fields
+
+    | Variables | Description | Notes |
+    | --------- | ----------- | ----- |
+    | GCP_PROJECT_ID | The ID of the GCP project to which the disk volumes belong | All the target disk volumes should belong to a single GCP project |
+    | DISK_VOLUME_LABEL | Label of the targeted non-boot persistent disk volumes | Provided as `key:value`, or `key` if the corresponding value is empty, e.g. `disk:target-disk` |
+    | DISK_ZONES | The zone of the target disk volumes | Only one zone can be provided, i.e. all target disks should lie in the same zone |
+
+    Optional Fields
+
+    | Variables | Description | Notes |
+    | --------- | ----------- | ----- |
+    | TOTAL_CHAOS_DURATION | The total time duration for chaos insertion (sec) | Defaults to 30s |
+    | CHAOS_INTERVAL | The interval (in sec) between successive chaos iterations | Defaults to 30s |
+    | DISK_AFFECTED_PERC | The percentage of total disks filtered using the label to target | Defaults to 0 (corresponds to 1 disk); provide a numeric value only |
+    | SEQUENCE | It defines the sequence of chaos execution for multiple disks | Default value: parallel. Supported: serial, parallel |
+    | RAMP_TIME | Period to wait before and after injection of chaos (sec) | |
+
+
+
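The `DISK_AFFECTED_PERC` behaviour (a value of 0 still targets one disk) can be pictured with a small sketch. This only illustrates the documented default, not the experiment's actual Go implementation:

```python
def pick_targets(disks, affected_perc=0):
    """Select how many labelled disks to detach; 0% still yields one disk."""
    if not disks:
        return []
    count = max(1, (len(disks) * affected_perc) // 100)
    return disks[:count]

# e.g. 4 labelled disks at the default 0% -> 1 target; at 50% -> 2 targets
```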
+## Experiment Examples
+
+### Common Experiment Tunables
+
+Refer to the [common attributes](../common/common-tunables-for-all-experiments.md) to tune the common tunables for all the experiments.
+
+### Detach Volumes By Label
+
+It specifies the label of the disk volumes to be subjected to disk loss chaos. The experiment detaches all the disks with the label `DISK_VOLUME_LABEL` in the `DISK_ZONES` zone within the `GCP_PROJECT_ID` project, and re-attaches them after the specified `TOTAL_CHAOS_DURATION`.
+
+`NOTE:` The `DISK_VOLUME_LABEL` accepts only one label and `DISK_ZONES` also accepts only one zone name. Therefore, all the disks must lie in the same zone.
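
One plausible way the `key:value` / bare-`key` label forms map onto a GCP Compute filter expression — the function is illustrative only; the experiment itself filters disks via the GCP client libraries:

```python
def label_filter(label: str) -> str:
    """Translate a DISK_VOLUME_LABEL value into a GCP Compute label filter."""
    key, _, value = label.partition(":")
    if value:
        return f"labels.{key}={value}"
    # bare key with an empty value: match mere presence of the label
    return f"labels.{key}:*"

# e.g. 'disk:target-disk' -> 'labels.disk=target-disk'
```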
+
+Use the following example to tune this:
+
+[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss-by-label/gcp-disk-loss.yaml yaml)
+```yaml
+## details of the gcp disk
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+ name: engine-nginx
+spec:
+ engineState: "active"
+ annotationCheck: "false"
+ chaosServiceAccount: gcp-vm-disk-loss-by-label-sa
+ experiments:
+ - name: gcp-vm-disk-loss-by-label
+ spec:
+ components:
+ env:
+ - name: DISK_VOLUME_LABEL
+ value: 'disk:target-disk'
+
+ - name: DISK_ZONES
+ value: 'us-east1-b'
+
+ - name: GCP_PROJECT_ID
+ value: 'my-project-4513'
+
+ - name: TOTAL_CHAOS_DURATION
+          value: '60'
+```
+
+### Multiple Iterations Of Chaos
+
+Multiple iterations of chaos can be tuned by setting the `CHAOS_INTERVAL` ENV, which defines the delay between successive iterations of chaos.
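
The relationship between the two ENVs can be sketched as follows — a rough approximation of how many detach/attach cycles occur, not the experiment's exact accounting:

```python
def iteration_count(total_chaos_duration: int, chaos_interval: int) -> int:
    """Approximate number of chaos iterations for the given tunables."""
    return max(1, total_chaos_duration // max(1, chaos_interval))

# e.g. TOTAL_CHAOS_DURATION=60 with CHAOS_INTERVAL=15 -> roughly 4 iterations
```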
+
+Use the following example to tune this:
+
+[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss-by-label/chaos-interval.yaml yaml)
+```yaml
+# defines delay between each successive iteration of the chaos
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+ name: engine-nginx
+spec:
+ engineState: "active"
+ annotationCheck: "false"
+ chaosServiceAccount: gcp-vm-disk-loss-by-label-sa
+ experiments:
+ - name: gcp-vm-disk-loss-by-label
+ spec:
+ components:
+ env:
+ - name: CHAOS_INTERVAL
+ value: '15'
+
+ - name: TOTAL_CHAOS_DURATION
+          value: '60'
+
+ - name: DISK_VOLUME_LABEL
+ value: 'disk:target-disk'
+
+ - name: DISK_ZONES
+ value: 'us-east1-b'
+
+ - name: GCP_PROJECT_ID
+ value: 'my-project-4513'
+
+```
diff --git a/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss-by-label/chaos-interval.yaml b/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss-by-label/chaos-interval.yaml
new file mode 100644
index 00000000000..b0f0c205509
--- /dev/null
+++ b/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss-by-label/chaos-interval.yaml
@@ -0,0 +1,27 @@
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+ name: engine-nginx
+spec:
+ engineState: "active"
+ annotationCheck: "false"
+ chaosServiceAccount: gcp-vm-disk-loss-by-label-sa
+ experiments:
+ - name: gcp-vm-disk-loss-by-label
+ spec:
+ components:
+ env:
+ - name: CHAOS_INTERVAL
+ value: '15'
+
+ - name: TOTAL_CHAOS_DURATION
+          value: '60'
+
+ - name: DISK_VOLUME_LABEL
+ value: 'disk:target-disk'
+
+ - name: DISK_ZONES
+ value: 'us-east1-b'
+
+ - name: GCP_PROJECT_ID
+ value: 'my-project-4513'
diff --git a/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss-by-label/gcp-disk-loss.yaml b/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss-by-label/gcp-disk-loss.yaml
new file mode 100644
index 00000000000..7bada1c5e4f
--- /dev/null
+++ b/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss-by-label/gcp-disk-loss.yaml
@@ -0,0 +1,24 @@
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+ name: engine-nginx
+spec:
+ engineState: "active"
+ annotationCheck: "false"
+ chaosServiceAccount: gcp-vm-disk-loss-by-label-sa
+ experiments:
+ - name: gcp-vm-disk-loss-by-label
+ spec:
+ components:
+ env:
+ - name: DISK_VOLUME_LABEL
+ value: 'disk:target-disk'
+
+ - name: DISK_ZONES
+ value: 'us-east1-b'
+
+ - name: GCP_PROJECT_ID
+ value: 'my-project-4513'
+
+ - name: TOTAL_CHAOS_DURATION
+          value: '60'
diff --git a/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss.md b/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss.md
index e96db68ff99..cc88e55e9fb 100644
--- a/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss.md
+++ b/mkdocs/docs/experiments/categories/gcp/gcp-vm-disk-loss.md
@@ -17,7 +17,7 @@
 - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from here
 - Ensure that the `gcp-vm-disk-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from here
- Ensure that your service account has an editor access or owner access for the GCP project.
- - Ensure the target disk volume to be detached should not be the root volume its instance.
+ - Ensure that the target disk volume is not a boot disk of any VM instance.
- Ensure to create a Kubernetes secret having the GCP service account credentials in the default namespace. A sample secret file looks like:
```yaml
@@ -171,7 +171,7 @@
SEQUENCE |
- It defines sequence of chaos execution for multiple instance |
+ It defines sequence of chaos execution for multiple disks |
Default value: parallel. Supported: serial, parallel |
diff --git a/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label.md b/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label.md
new file mode 100644
index 00000000000..bb3602b7be5
--- /dev/null
+++ b/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label.md
@@ -0,0 +1,308 @@
+## Introduction
+
+- It causes power-off of GCP VM instances filtered by a label before bringing them back to the running state after the specified chaos duration.
+- It helps to check the performance of the application/process running on the VM instances.
+- When `MANAGED_INSTANCE_GROUP` is `enable`, the experiment will not try to start the instances post chaos; instead it will check for the addition of new instances to the instance group.
+
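The managed-instance-group behaviour in the last point can be sketched as a polling check — names and timings here are illustrative, not the experiment's actual implementation:

```python
import time

def wait_for_group_recovery(get_running_count, expected, timeout=180, poll=5):
    """With MANAGED_INSTANCE_GROUP=enable the experiment does not restart
    the VMs itself; it waits for the group to replace them, i.e. for the
    running-instance count to climb back to the expected size."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_running_count() >= expected:
            return True
        time.sleep(poll)
    return False
```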
+!!! tip "Scenario: stop the gcp vm"
+ ![GCP VM Instance Stop By Label](../../images/gcp-vm-instance-stop.png)
+
+## Uses
+
+??? info "View the uses of the experiment"
+ coming soon
+
+## Prerequisites
+
+??? info "Verify the prerequisites"
+ - Ensure that Kubernetes Version > 1.16
+    - Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from here
+    - Ensure that the `gcp-vm-instance-stop-by-label` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from here
+ - Ensure that you have sufficient GCP permissions to stop and start the GCP VM instances.
+ - Ensure to create a Kubernetes secret having the GCP service account credentials in the default namespace. A sample secret file looks like:
+
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: cloud-secret
+ type: Opaque
+ stringData:
+ type:
+ project_id:
+ private_key_id:
+ private_key:
+ client_email:
+ client_id:
+ auth_uri:
+ token_uri:
+ auth_provider_x509_cert_url:
+ client_x509_cert_url:
+ ```
+
+## Default Validations
+
+??? info "View the default validations"
+ - All the VM instances having the target label are in a healthy state
+
+## Minimal RBAC configuration example (optional)
+
+!!! tip "NOTE"
+    If you are using this experiment as part of a Litmus workflow scheduled/constructed & executed from Chaos Center, then you may be making use of the [litmus-admin](https://litmuschaos.github.io/litmus/litmus-admin-rbac.yaml) RBAC, which is pre-installed in the cluster as part of the agent setup.
+
+ ??? note "View the Minimal RBAC permissions"
+
+ [embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/gcp/gcp-vm-instance-stop-by-label/rbac.yaml yaml)
+ ```yaml
+ ---
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: gcp-vm-instance-stop-by-label-sa
+ namespace: default
+ labels:
+ name: gcp-vm-instance-stop-by-label-sa
+ app.kubernetes.io/part-of: litmus
+ ---
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRole
+ metadata:
+ name: gcp-vm-instance-stop-by-label-sa
+ labels:
+ name: gcp-vm-instance-stop-by-label-sa
+ app.kubernetes.io/part-of: litmus
+ rules:
+ # Create and monitor the experiment & helper pods
+ - apiGroups: [""]
+ resources: ["pods"]
+ verbs: ["create","delete","get","list","patch","update", "deletecollection"]
+ # Performs CRUD operations on the events inside chaosengine and chaosresult
+ - apiGroups: [""]
+ resources: ["events"]
+ verbs: ["create","get","list","patch","update"]
+ # Fetch configmaps & secrets details and mount it to the experiment pod (if specified)
+ - apiGroups: [""]
+ resources: ["secrets","configmaps"]
+ verbs: ["get","list",]
+ # Track and get the runner, experiment, and helper pods log
+ - apiGroups: [""]
+ resources: ["pods/log"]
+ verbs: ["get","list","watch"]
+ # for configuring and monitor the experiment job by the chaos-runner pod
+ - apiGroups: ["batch"]
+ resources: ["jobs"]
+ verbs: ["create","list","get","delete","deletecollection"]
+ # for creation, status polling and deletion of litmus chaos resources used within a chaos workflow
+ - apiGroups: ["litmuschaos.io"]
+ resources: ["chaosengines","chaosexperiments","chaosresults"]
+ verbs: ["create","list","get","patch","update","delete"]
+ # for experiment to perform node status checks
+ - apiGroups: [""]
+ resources: ["nodes"]
+ verbs: ["get","list"]
+ ---
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+ name: gcp-vm-instance-stop-by-label-sa
+ labels:
+ name: gcp-vm-instance-stop-by-label-sa
+ app.kubernetes.io/part-of: litmus
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: gcp-vm-instance-stop-by-label-sa
+ subjects:
+ - kind: ServiceAccount
+ name: gcp-vm-instance-stop-by-label-sa
+ namespace: default
+ ```
+ Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
+
+## Experiment tunables
+
+??? info "check the experiment tunables"
+    Mandatory Fields
+
+    | Variables | Description | Notes |
+    | --------- | ----------- | ----- |
+    | GCP_PROJECT_ID | GCP project ID to which the VM instances belong | All the VM instances must belong to a single GCP project |
+    | INSTANCE_LABEL | Label of the target VM instances | Provided as `key:value`, or `key` if the corresponding value is empty, e.g. `vm:target-vm` |
+    | INSTANCE_ZONES | The zone of the target VM instances | Only one zone can be provided, i.e. all target instances should lie in the same zone |
+
+    Optional Fields
+
+    | Variables | Description | Notes |
+    | --------- | ----------- | ----- |
+    | TOTAL_CHAOS_DURATION | The total time duration for chaos insertion (sec) | Defaults to 30s |
+    | CHAOS_INTERVAL | The interval (in sec) between successive instance terminations | Defaults to 30s |
+    | MANAGED_INSTANCE_GROUP | Set to enable if the target instances are part of a managed instance group | Defaults to disable |
+    | INSTANCE_AFFECTED_PERC | The percentage of total VMs filtered using the label to target | Defaults to 0 (corresponds to 1 instance); provide a numeric value only |
+    | SEQUENCE | It defines the sequence of chaos execution for multiple instances | Default value: parallel. Supported: serial, parallel |
+    | RAMP_TIME | Period to wait before and after injection of chaos (sec) | |
+
+
+
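The `SEQUENCE` tunable from the table above can be pictured as a batching choice — a minimal sketch, not the experiment's actual scheduler:

```python
def chaos_batches(instances, sequence="parallel"):
    """parallel stops all targets in one batch; serial stops them one by one."""
    if sequence == "serial":
        return [[name] for name in instances]
    return [list(instances)]
```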
+## Experiment Examples
+
+### Common Experiment Tunables
+
+Refer to the [common attributes](../common/common-tunables-for-all-experiments.md) to tune the common tunables for all the experiments.
+
+### Target GCP Instances
+
+It stops all the instances filtered by the label `INSTANCE_LABEL` in the corresponding `INSTANCE_ZONES` zone within the `GCP_PROJECT_ID` project.
+
+`NOTE:` The `INSTANCE_LABEL` accepts only one label and `INSTANCE_ZONES` also accepts only one zone name. Therefore, all the instances must lie in the same zone.
+
+Use the following example to tune this:
+
+[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/gcp-instance.yaml yaml)
+```yaml
+## details of the gcp instance
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+ name: engine-nginx
+spec:
+ engineState: "active"
+ annotationCheck: "false"
+ chaosServiceAccount: gcp-vm-instance-stop-by-label-sa
+ experiments:
+ - name: gcp-vm-instance-stop-by-label
+ spec:
+ components:
+ env:
+ - name: INSTANCE_LABEL
+ value: 'vm:target-vm'
+
+ - name: INSTANCE_ZONES
+ value: 'us-east1-b'
+
+ - name: GCP_PROJECT_ID
+ value: 'my-project-4513'
+
+ - name: TOTAL_CHAOS_DURATION
+          value: '60'
+```
+
+### Managed Instance Group
+
+If the VM instances belong to a managed instance group, then provide `MANAGED_INSTANCE_GROUP` as `enable`; else provide it as `disable`, which is the default value.
+
+Use the following example to tune this:
+
+[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/managed-instance-group.yaml yaml)
+```yaml
+## scale up and down to maintain the available instance counts
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+ name: engine-nginx
+spec:
+ engineState: "active"
+ annotationCheck: "false"
+ chaosServiceAccount: gcp-vm-instance-stop-by-label-sa
+ experiments:
+ - name: gcp-vm-instance-stop-by-label
+ spec:
+ components:
+ env:
+ - name: MANAGED_INSTANCE_GROUP
+ value: 'enable'
+
+ - name: INSTANCE_LABEL
+ value: 'vm:target-vm'
+
+ - name: INSTANCE_ZONES
+ value: 'us-east1-b'
+
+ - name: GCP_PROJECT_ID
+ value: 'my-project-4513'
+
+ - name: TOTAL_CHAOS_DURATION
+          value: '60'
+```
+
+### Multiple Iterations Of Chaos
+
+Multiple iterations of chaos can be tuned by setting the `CHAOS_INTERVAL` ENV, which defines the delay between successive iterations of chaos.
+
+Use the following example to tune this:
+
+[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/chaos-interval.yaml yaml)
+```yaml
+# defines delay between each successive iteration of the chaos
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+ name: engine-nginx
+spec:
+ engineState: "active"
+ annotationCheck: "false"
+ chaosServiceAccount: gcp-vm-instance-stop-by-label-sa
+ experiments:
+ - name: gcp-vm-instance-stop-by-label
+ spec:
+ components:
+ env:
+ - name: CHAOS_INTERVAL
+ value: '15'
+
+ - name: TOTAL_CHAOS_DURATION
+          value: '60'
+
+ - name: INSTANCE_LABEL
+ value: 'vm:target-vm'
+
+ - name: INSTANCE_ZONES
+ value: 'us-east1-b'
+
+ - name: GCP_PROJECT_ID
+ value: 'my-project-4513'
+
+```
diff --git a/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/chaos-interval.yaml b/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/chaos-interval.yaml
new file mode 100644
index 00000000000..e44679b6337
--- /dev/null
+++ b/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/chaos-interval.yaml
@@ -0,0 +1,27 @@
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+ name: engine-nginx
+spec:
+ engineState: "active"
+ annotationCheck: "false"
+ chaosServiceAccount: gcp-vm-instance-stop-by-label-sa
+ experiments:
+ - name: gcp-vm-instance-stop-by-label
+ spec:
+ components:
+ env:
+ - name: CHAOS_INTERVAL
+ value: '15'
+
+ - name: TOTAL_CHAOS_DURATION
+          value: '60'
+
+ - name: INSTANCE_LABEL
+ value: 'vm:target-vm'
+
+ - name: INSTANCE_ZONES
+ value: 'us-east1-b'
+
+ - name: GCP_PROJECT_ID
+ value: 'my-project-4513'
\ No newline at end of file
diff --git a/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/gcp-instance.yaml b/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/gcp-instance.yaml
new file mode 100644
index 00000000000..8a9f2ee58d8
--- /dev/null
+++ b/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/gcp-instance.yaml
@@ -0,0 +1,24 @@
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+ name: engine-nginx
+spec:
+ engineState: "active"
+ annotationCheck: "false"
+ chaosServiceAccount: gcp-vm-instance-stop-by-label-sa
+ experiments:
+ - name: gcp-vm-instance-stop-by-label
+ spec:
+ components:
+ env:
+ - name: INSTANCE_LABEL
+ value: 'vm:target-vm'
+
+ - name: INSTANCE_ZONES
+ value: 'us-east1-b'
+
+ - name: GCP_PROJECT_ID
+ value: 'my-project-4513'
+
+ - name: TOTAL_CHAOS_DURATION
+          value: '60'
\ No newline at end of file
diff --git a/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/managed-instance-group.yaml b/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/managed-instance-group.yaml
new file mode 100644
index 00000000000..2cd03b29cd4
--- /dev/null
+++ b/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop-by-label/managed-instance-group.yaml
@@ -0,0 +1,27 @@
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+ name: engine-nginx
+spec:
+ engineState: "active"
+ annotationCheck: "false"
+ chaosServiceAccount: gcp-vm-instance-stop-by-label-sa
+ experiments:
+ - name: gcp-vm-instance-stop-by-label
+ spec:
+ components:
+ env:
+ - name: MANAGED_INSTANCE_GROUP
+ value: 'enable'
+
+ - name: INSTANCE_LABEL
+ value: 'vm:target-vm'
+
+ - name: INSTANCE_ZONES
+ value: 'us-east1-b'
+
+ - name: GCP_PROJECT_ID
+ value: 'my-project-4513'
+
+ - name: TOTAL_CHAOS_DURATION
+          value: '60'
\ No newline at end of file
diff --git a/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop.md b/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop.md
index 2c8090efa62..894ac5b5eed 100644
--- a/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop.md
+++ b/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop.md
@@ -2,7 +2,7 @@
- It causes power-off of a GCP VM instance by instance name or list of instance names before bringing it back to the running state after the specified chaos duration.
- It helps to check the performance of the application/process running on the VM instance.
-- When the `AUTO_SCALING_GROUP` is enable then the experiment will not try to start the instance post chaos, instead it will check the addition of the new node instances to the cluster.
+- When `MANAGED_INSTANCE_GROUP` is `enable`, the experiment will not try to start the instances post chaos; instead it will check for the addition of new instances to the instance group.
!!! tip "Scenario: stop the gcp vm"
![GCP VM Instance Stop](../../images/gcp-vm-instance-stop.png)
@@ -170,8 +170,8 @@
Defaults to 30s |
- AUTO_SCALING_GROUP |
- Set to enable if the target instance is the part of a auto-scaling group |
+ MANAGED_INSTANCE_GROUP |
+ Set to enable if the target instances are part of a managed instance group |
Defaults to disable |
@@ -230,13 +230,13 @@ spec:
VALUE: '60'
```
-### Autoscaling NodeGroup
+### Managed Instance Group
-If vm instances belong to the autoscaling group then provide the `AUTO_SCALING_GROUP` as `enable` else provided it as `disable`. The default value of `AUTO_SCALING_GROUP` is `disable`.
+If the VM instances belong to a managed instance group, then provide `MANAGED_INSTANCE_GROUP` as `enable`; else provide it as `disable`, which is the default value.
Use the following example to tune this:
-[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop/auto-scaling.yaml yaml)
+[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/litmus/master/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop/managed-instance-group.yaml yaml)
```yaml
## scale up and down to maintain the available instance counts
apiVersion: litmuschaos.io/v1alpha1
@@ -252,9 +252,9 @@ spec:
spec:
components:
env:
- # tells if instances are part of autoscaling group
+ # tells if instances are part of managed instance group
# supports: enable, disable. default: disable
- - name: AUTO_SCALING_GROUP
+ - name: MANAGED_INSTANCE_GROUP
value: 'enable'
# comma separated list of vm instance names
- name: VM_INSTANCE_NAMES
diff --git a/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop/auto-scaling.yaml b/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop/managed-instance-group.yaml
similarity index 88%
rename from mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop/auto-scaling.yaml
rename to mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop/managed-instance-group.yaml
index 4d451273ee2..8ef26c92b4c 100644
--- a/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop/auto-scaling.yaml
+++ b/mkdocs/docs/experiments/categories/gcp/gcp-vm-instance-stop/managed-instance-group.yaml
@@ -12,9 +12,9 @@ spec:
spec:
components:
env:
- # tells if instances are part of autoscaling group
+ # tells if instances are part of managed instance group
# supports: enable, disable. default: disable
- - name: AUTO_SCALING_GROUP
+ - name: MANAGED_INSTANCE_GROUP
value: 'enable'
# comma separated list of vm instance names
- name: VM_INSTANCE_NAMES
@@ -27,4 +27,4 @@ spec:
- name: GCP_PROJECT_ID
value: 'project-id'
- name: TOTAL_CHAOS_DURATION
- VALUE: '60'
+ VALUE: '60'
\ No newline at end of file
diff --git a/mkdocs/docs/experiments/images/gcp-vm-disk-loss.png b/mkdocs/docs/experiments/images/gcp-vm-disk-loss.png
index 84357d46a12..3eabf02b38e 100644
Binary files a/mkdocs/docs/experiments/images/gcp-vm-disk-loss.png and b/mkdocs/docs/experiments/images/gcp-vm-disk-loss.png differ
diff --git a/mkdocs/docs/experiments/images/gcp-vm-instance-stop.png b/mkdocs/docs/experiments/images/gcp-vm-instance-stop.png
index 1f2d2b89e76..05fcf222de7 100644
Binary files a/mkdocs/docs/experiments/images/gcp-vm-instance-stop.png and b/mkdocs/docs/experiments/images/gcp-vm-instance-stop.png differ
diff --git a/mkdocs/mkdocs.yml b/mkdocs/mkdocs.yml
index 5614372e637..42f0d218101 100644
--- a/mkdocs/mkdocs.yml
+++ b/mkdocs/mkdocs.yml
@@ -129,8 +129,12 @@ nav:
- Azure Instance Stop: experiments/categories/azure/azure-instance-stop.md
- Azure Disk Loss: experiments/categories/azure/azure-disk-loss.md
- GCP:
- - GCP Instance Stop: experiments/categories/gcp/gcp-vm-instance-stop.md
- - GCP Disk Loss: experiments/categories/gcp/gcp-vm-disk-loss.md
+ - VM Instance:
+ - GCP VM Instance Stop: experiments/categories/gcp/gcp-vm-instance-stop.md
+ - GCP VM Instance Stop By Label: experiments/categories/gcp/gcp-vm-instance-stop-by-label.md
+ - VM Disk:
+ - GCP VM Disk Loss: experiments/categories/gcp/gcp-vm-disk-loss.md
+ - GCP VM Disk Loss By Label: experiments/categories/gcp/gcp-vm-disk-loss-by-label.md
- Concepts:
- Chaos Resources:
- Contents: experiments/concepts/chaos-resources/contents.md