
Merge pull request #180 from weaveworks/readme-enhancements
Upgrade api version in sample policies
serboctor authored Jun 1, 2023
2 parents 7781104 + 3ea3a2c commit 68162fc
Showing 7 changed files with 105 additions and 110 deletions.
36 changes: 16 additions & 20 deletions docs/getting-started.md
@@ -14,16 +14,10 @@ If you are not using flux, you need to have both [Helm](https://helm.sh/docs/int

By default, the policy agent is configured to enforce policies using the Kubernetes admission controller and to publish violation events to Kubernetes Events. For advanced configurations, please check [here](../helm/values.yaml).

To install Weave Policy Agent, you can use Flux and a HelmRelease as part of the GitOps ecosystem, or you can install the agent directly with Helm.

### Using HelmRelease and Flux

Create the `policy-system` namespace to install the chart in:

```bash
kubectl create ns policy-system
```

In your Flux repo, in the cluster's directory, create the following `HelmRepository` and `HelmRelease` manifests referencing the policy Helm chart, then push the new files to your repository.

Note: You can create these manifests in another directory; just make sure that directory is reconciled by Flux.
@@ -86,6 +80,8 @@ spec:
      version: 2.3.0
  interval: 10m0s
  targetNamespace: policy-system
  install:
    createNamespace: true
  values:
    caCertificate: ""
    certificate: ""
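
The hunk above shows only part of the HelmRelease. For orientation, a complete pair of manifests might look like the sketch below. The HelmRepository URL and the chart name are assumptions, not taken from this diff; the version, `targetNamespace`, and `install.createNamespace` values follow the snippet above. Verify against the chart's documentation before use.

```yaml
# Sketch only: repository URL and chart name are illustrative assumptions.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: policy-agent
  namespace: flux-system
spec:
  interval: 10m0s
  url: https://weaveworks.github.io/policy-agent/  # assumed URL
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: policy-agent
  namespace: flux-system
spec:
  chart:
    spec:
      chart: policy-agent        # assumed chart name
      sourceRef:
        kind: HelmRepository
        name: policy-agent
      version: 2.3.0
  interval: 10m0s
  targetNamespace: policy-system
  install:
    createNamespace: true        # creates policy-system if it does not exist
```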
@@ -148,9 +144,9 @@ Check the installation status using the below command, you should expect the pod

## Installing Policies

Weave Policy Agent uses Policy custom resources to validate resource compliance. The Policy custom resource follows this definition ([Policy CRD](../helm/crds/pac.weave.works_policies.yaml)) and consists of policy code and policy metadata. Policy code is written in the OPA Rego language.

To get started, you can use the default policies found [here](../policies/), which cover some Kubernetes and Flux best practices.

### Using Flux Kustomization

@@ -194,7 +190,7 @@ Create `policies` directory and create the following `kustomization.yaml` file,
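
The `kustomization.yaml` itself is collapsed in this diff view; a minimal sketch of such a file might look like this. The resource paths are illustrative assumptions — point them at the policy manifests you actually copied into the `policies` directory.

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Illustrative paths — replace with the policy files you vendored
  - ControllerContainerBlockSysctls.yaml
  - ControllerContainerRunningAsRoot.yaml
```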

```bash
kubectl apply -k policies
```

<details>
<summary>kustomization.yaml - Click to expand .. </summary>
@@ -218,17 +214,17 @@ kubectl get policies

### View Policies in WeaveGitOps

If you have the WeaveGitOps UI installed on your cluster, you can use it to explore the policies installed on the cluster, as well as the details of each policy.

<!-- ![Policies](imgs/policies.png) -->

## Explore Violations

With the agent and policies installed, Weave Policy Agent will prevent any resource that violates the relevant policies from being created or updated.

When using Flux, reconciliation will fail if one of your application resources violates any of the policies.

You should be able to see an error like this:

<details>
<summary>Admission controller violation error - Click to expand .. </summary>
@@ -240,7 +236,7 @@ You should be able to see an error like this:
Entity : deployment/nginx-deployment in namespace: default
Occurrences:
- Replica count must be greater than or equal to '2'; found '1'.
): error when creating "deployment.yaml": admission webhook "admission.agent.weaveworks" denied the request:
==================================================================
Policy : weave.policies.containers-minimum-replica-count
Entity : deployment/nginx-deployment in namespace: default
@@ -251,7 +247,7 @@ You should be able to see an error like this:
</details>

### Violating Deployment Example
If you don't have a violating application or resource on your cluster, you can use the following Deployment as an example to try the agent out.

This Deployment violates the `Containers Minimum Replica Count` policy by having 1 replica instead of the minimum of 2.
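
The example Deployment is collapsed in this diff view. A minimal sketch of a Deployment that would trigger the violation might be the following; the image and labels are illustrative, while the name and namespace match the entity in the violation message above.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # matches deployment/nginx-deployment in the error above
  namespace: default
spec:
  replicas: 1              # violates the minimum replica count of 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest   # illustrative image
```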

@@ -297,19 +293,19 @@ Since Kubernetes events are configured as a sink for the admission mode, you can

### Check violations via WeaveGitOps UI

If you have the WeaveGitOps UI installed, you can find each policy's violations listed in the Violations tab inside each policy.

<!-- ![WeaveGitOps UI](imgs/violations.png) -->

## Fix Policy Violations

Your next step is to start fixing policy violations. To do that, follow the remediation steps listed in each policy, apply them to the violating resources, and re-apply the resource or let Flux sync the updated manifest.

Remediation steps are available in the policy custom resource `yaml`, under the `how_to_resolve` section.

![how to solve](./imgs/how-to-solve.png)

The remediation steps are also viewable in the WeaveGitOps UI on each policy page.

<!-- ![how to solve](./imgs/how-to-solve-2.png) -->

@@ -319,7 +315,7 @@ To fix the violation on the deployment example, simply update the `replicas` cou
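
Continuing the deployment example, the fix is a one-line change; in this sketch only the `replicas` field differs from the violating manifest.

```yaml
spec:
  replicas: 2   # meets the Containers Minimum Replica Count policy (minimum of 2)
```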

## Exclude Namespaces

Usually, you will have certain namespaces that you need excluded from policy evaluation, because they are vital to how your cluster operates and you don't want them affected by policy violations — for example, `kube-system` and `flux-system`.

To prevent the agent from scanning certain namespaces and blocking deployments in them, add these namespaces to `excludeNamespaces` in the Policy Agent Helm chart values file.
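
A sketch of the relevant values file section follows. `excludeNamespaces` is the key named above; its exact nesting within the values file is an assumption here — check [../helm/values.yaml](../helm/values.yaml) for the authoritative structure.

```yaml
excludeNamespaces:
  - kube-system
  - flux-system
```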

7 changes: 3 additions & 4 deletions docs/policy.md
@@ -11,13 +11,13 @@ You can find the custom resource schema [here](../config/crd/bases/pac.weave.work

## Policy Library

-Here is the Weaveworks [Policy Library](https://github.com/weaveworks/policy-library)
+Weaveworks offers an extensive policy library to Weave GitOps Assured and Enterprise customers. The library contains over 150 policies that cover security, best practices, and standards like SOC2, GDPR, PCI-DSS, HIPAA, MITRE ATT&CK, and more.

## Tenant Policy

It is used by the [Multi Tenancy](https://docs.gitops.weave.works/docs/enterprise/multi-tenancy/) feature in [Weave GitOps Enterprise](https://docs.gitops.weave.works/docs/enterprise/intro/).

Tenant policies have a special tag, `tenancy`.

## Mutating Resources

@@ -28,7 +28,7 @@ Starting from version `v2.2.0`, the policy agent will support mutating resources

To enable mutating resources, policies must have the field `mutate` set to `true`, and the Rego code should return the `violating_key` and the `recommended_value` in the violation response. The mutation webhook uses the `violating_key` and `recommended_value` to mutate the resource and return the new, mutated resource.

Example:

```
result = {
    "recommended_value": min_replica_count
}
```

6 changes: 3 additions & 3 deletions policies/ControllerContainerAllowingPrivilegeEscalation.yaml
@@ -1,4 +1,4 @@
-apiVersion: pac.weave.works/v2beta1
+apiVersion: pac.weave.works/v2beta2
kind: Policy
metadata:
name: weave.policies.containers-running-with-privilege-escalation
@@ -106,6 +106,6 @@ spec:
}
isExcludedNamespace = true {
    controller_input.metadata.namespace
    controller_input.metadata.namespace in exclude_namespaces
} else = false
28 changes: 14 additions & 14 deletions policies/ControllerContainerBlockSysctls.yaml
@@ -1,4 +1,4 @@
-apiVersion: pac.weave.works/v2beta1
+apiVersion: pac.weave.works/v2beta2
kind: Policy
metadata:
name: weave.policies.container-block-sysctl
@@ -54,33 +54,33 @@ spec:
exclude_label_value := input.parameters.exclude_label_value
violation[result] {
    isExcludedNamespace == false
    not exclude_label_value == controller_input.metadata.labels[exclude_label_key]
    controller_spec.securityContext.sysctls
    result = {
        "issue detected": true,
        "msg": "Adding sysctls could lead to unauthorized escalated privileges to the underlying node",
        "violating_key": "spec.template.spec.securityContext.sysctls"
    }
}
###### Functions
isArrayContains(array, str) {
    array[_] = str
}
# Initial Setup
controller_input = input.review.object
controller_spec = controller_input.spec.template.spec {
    isArrayContains({"StatefulSet", "DaemonSet", "Deployment", "Job", "ReplicaSet"}, controller_input.kind)
} else = controller_input.spec {
    controller_input.kind == "Pod"
} else = controller_input.spec.jobTemplate.spec.template.spec {
    controller_input.kind == "CronJob"
}
isExcludedNamespace = true {
    controller_input.metadata.namespace
    controller_input.metadata.namespace in exclude_namespaces
} else = false
102 changes: 51 additions & 51 deletions policies/ControllerContainerRunningAsRoot.yaml
@@ -1,4 +1,4 @@
-apiVersion: pac.weave.works/v2beta1
+apiVersion: pac.weave.works/v2beta2
kind: Policy
metadata:
name: weave.policies.container-running-as-root
@@ -55,77 +55,77 @@ spec:
# Check for missing securityContext.runAsNonRoot (missing in both, pod and container)
violation[result] {
    isExcludedNamespace == false
    not exclude_label_value == controller_input.metadata.labels[exclude_label_key]
    controller_spec.securityContext
    not controller_spec.securityContext.runAsNonRoot
    not controller_spec.securityContext.runAsNonRoot == false
    some i
    containers := controller_spec.containers[i]
    containers.securityContext
    not containers.securityContext.runAsNonRoot
    not containers.securityContext.runAsNonRoot == false
    result = {
        "issue detected": true,
        "msg": sprintf("Container missing spec.template.spec.containers[%v].securityContext.runAsNonRoot while Pod spec.template.spec.securityContext.runAsNonRoot is not defined as well.", [i]),
        "violating_key": sprintf("spec.template.spec.containers[%v].securityContext", [i]),
    }
}
# Container security context
# Check if containers.securityContext.runAsNonRoot exists and = false
violation[result] {
    isExcludedNamespace == false
    not exclude_label_value == controller_input.metadata.labels[exclude_label_key]
    some i
    containers := controller_spec.containers[i]
    containers.securityContext
    containers.securityContext.runAsNonRoot == false
    result = {
        "issue detected": true,
        "msg": sprintf("Container spec.template.spec.containers[%v].securityContext.runAsNonRoot should be set to true", [i]),
        "violating_key": sprintf("spec.template.spec.containers[%v].securityContext.runAsNonRoot", [i]),
        "recommended_value": true,
    }
}
# Pod security context
# Check if spec.securityContext.runAsNonRoot exists and = false
violation[result] {
    isExcludedNamespace == false
    not exclude_label_value == controller_input.metadata.labels[exclude_label_key]
    controller_spec.securityContext
    controller_spec.securityContext.runAsNonRoot == false
    result = {
        "issue detected": true,
        "msg": "Pod spec.template.spec.securityContext.runAsNonRoot should be set to true",
        "violating_key": "spec.template.spec.securityContext.runAsNonRoot",
        "recommended_value": true,
    }
}
controller_input = input.review.object
controller_spec = controller_input.spec.template.spec {
    contains(controller_input.kind, {"StatefulSet", "DaemonSet", "Deployment", "Job", "ReplicaSet"})
} else = controller_input.spec {
    controller_input.kind == "Pod"
} else = controller_input.spec.jobTemplate.spec.template.spec {
    controller_input.kind == "CronJob"
}
contains(kind, kinds) {
    kinds[_] = kind
}
isExcludedNamespace = true {
    controller_input.metadata.namespace
    controller_input.metadata.namespace in exclude_namespaces
} else = false
