This repository contains all the files required to run a test comparing Falco, Tetragon, KubeArmor, and Tracee.
To measure the usage of those agents, we will rely on:
- the OpenTelemetry Demo
- the Unguard application
- the Goat application to generate security violations
All the observability data generated by the environment will be sent to Dynatrace.
The following tools need to be installed on your machine:
- jq
- kubectl
- git
- gcloud (if you are using GKE)
- Helm
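To quickly check that everything is in place, you can run a small sanity check like the following (a minimal sketch; it only verifies that each binary is on the PATH):

# Verify that each required CLI is available
for tool in jq kubectl git gcloud helm; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done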
PROJECT_ID="<your-project-id>"
gcloud services enable container.googleapis.com --project ${PROJECT_ID}
gcloud services enable monitoring.googleapis.com \
cloudtrace.googleapis.com \
clouddebugger.googleapis.com \
cloudprofiler.googleapis.com \
--project ${PROJECT_ID}
ZONE=europe-west3-a
NAME=isitobservable-securitybenchmark
gcloud container clusters create ${NAME} --zone=${ZONE} --machine-type=e2-standard-4 --num-nodes=2
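If kubectl is not yet pointing at the new cluster, fetch the credentials with the standard gcloud command:

# Configure kubectl to talk to the new GKE cluster
gcloud container clusters get-credentials ${NAME} --zone=${ZONE} --project=${PROJECT_ID}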
git clone https://github.com/isitobservable/runtimesecuritybenchmark
cd runtimesecuritybenchmark
If you don't have a Dynatrace tenant, I suggest creating a trial using the following link: Dynatrace Trial
Once you have your tenant, save the Dynatrace tenant URL in the variable DT_TENANT_URL
(for example: https://dedededfrf.live.dynatrace.com)
DT_TENANT_URL=<YOUR TENANT Host>
The Dynatrace Operator requires several tokens:
- a token to deploy and configure the various components
- a token to ingest metrics and traces
Create one token for the operator with the following scopes:
- Create ActiveGate tokens
- Read entities
- Read Settings
- Write Settings
- Access problem and event feed, metrics and topology
- Read configuration
- Write configuration
- PaaS integration - Installer download
Save the value of the token. We will use it later to store it in a Kubernetes secret.
API_TOKEN=<YOUR TOKEN VALUE>
Then create a Dynatrace data ingest token with the following scopes:
- Ingest metrics (metrics.ingest)
- Ingest logs (logs.ingest)
- Ingest events (events.ingest)
- Ingest OpenTelemetry traces (openTelemetryTrace.ingest)
- Read metrics (metrics.read)
DATA_INGEST_TOKEN=<YOUR TOKEN VALUE>
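As mentioned above, both tokens end up in a Kubernetes secret read by the Dynatrace Operator. deployment.sh takes care of this, but for reference, a minimal sketch of the equivalent command looks like this (the secret name and namespace are illustrative):

# Illustrative only: deployment.sh creates this secret for you
kubectl create secret generic dynatrace \
  --namespace dynatrace \
  --from-literal="apiToken=${API_TOKEN}" \
  --from-literal="dataIngestToken=${DATA_INGEST_TOKEN}"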
The deployment script deploys the entire environment. Start with a baseline run without any security agent (TYPE=nothing):
TYPE=nothing
chmod +x deployment.sh
./deployment.sh --clustername "${NAME}" --dturl "${DT_TENANT_URL}" --dtingesttoken "${DATA_INGEST_TOKEN}" --dtoperatortoken "${API_TOKEN}" --type "${TYPE}"
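Before starting the measurement, you can check that all the workloads are up (the namespace names assume the defaults used by the manifests in this repository):

# Quick health check of the deployed applications
kubectl get pods -n otel-demo
kubectl get pods -n unguard
kubectl get pods -n goat-app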
Wait 30 minutes before launching the load test against the applications:
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
Once the measurement is done, remove the load test and update the environment to deploy Falco:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
OLD=$TYPE
TYPE=falco
chmod +x update.sh
./update.sh --type "${TYPE}" --previous "${OLD}"
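Before measuring, you can verify that the Falco DaemonSet is running (this assumes update.sh installs Falco into a falco namespace; adjust if it uses a different one):

# One falco pod should be running per node
kubectl get pods -n falco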
Wait 30 minutes before launching the load test against the applications:
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
Once the measurement is done, remove the load test and update the environment to deploy Tetragon:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
OLD=$TYPE
TYPE=tetragon
chmod +x update.sh
./update.sh --type "${TYPE}" --previous "${OLD}"
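As before, verify that the agent is up (Tetragon's Helm chart deploys its DaemonSet into kube-system by default; adjust if update.sh installs it elsewhere):

# One tetragon pod should be running per node
kubectl get pods -n kube-system -l app.kubernetes.io/name=tetragon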
Wait 30 minutes before launching the load test against the applications:
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
Next, let's measure Tetragon without its tracing policies:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
kubectl delete -k tetragon
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
Once the measurement is done, remove the load test and update the environment to deploy KubeArmor:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
OLD=$TYPE
TYPE=kubearmor
chmod +x update.sh
./update.sh --type "${TYPE}" --previous "${OLD}"
Modify the relay server so it sends events and alerts to the KubeArmor logs:
kubectl edit deployment kubearmor-relay -n kubearmor
ENABLE_STDOUT_LOGS, ENABLE_STDOUT_ALERTS, and ENABLE_STDOUT_MSGS need to be set to true.
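Instead of editing the deployment by hand, you can set all three variables in one command; kubectl set env triggers the same rollout of the relay server:

# Enable stdout output of logs, alerts, and messages on the relay server
kubectl set env deployment/kubearmor-relay -n kubearmor \
  ENABLE_STDOUT_LOGS=true ENABLE_STDOUT_ALERTS=true ENABLE_STDOUT_MSGS=true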
Wait 30 minutes before launching the load test against the applications:
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
Let's measure with events produced in several namespaces:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
kubectl annotate ns otel-demo kubearmor-visibility=process,file,network,capabilities --overwrite
kubectl annotate ns goat-app kubearmor-visibility=process,file,network,capabilities --overwrite
kubectl annotate ns unguard kubearmor-visibility=process,file,network,capabilities --overwrite
kubectl annotate ns default kubearmor-visibility=process,file,network,capabilities --overwrite
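You can confirm that the annotation was applied before relaunching the load test:

# The kubearmor-visibility annotation should appear on each namespace
kubectl describe ns otel-demo | grep kubearmor-visibility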
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
Let's measure with events produced and KubeArmor policies applied in several namespaces:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
kubectl apply -k kubearmor/policies
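Before relaunching the load test, you can list the policies to make sure they were admitted (assuming the kustomization creates namespaced KubeArmorPolicy objects):

# List all KubeArmor policies across namespaces
kubectl get kubearmorpolicies -A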
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
Once the measurement is done, remove the load test and update the environment to deploy Tracee:
kubectl delete -f opentelemetry/loadtest_job.yaml -n otel-demo
OLD=$TYPE
TYPE=tracee
chmod +x update.sh
./update.sh --type "${TYPE}" --previous "${OLD}"
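As with the other agents, check that the Tracee DaemonSet is up before measuring (tracee-system is the Helm chart's documented namespace; adjust if update.sh installs it elsewhere):

# One tracee pod should be running per node
kubectl get pods -n tracee-system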
Wait 30 minutes before launching the load test against the applications:
kubectl apply -f opentelemetry/loadtest_job.yaml -n otel-demo
If you are having issues with lots of workloads in a pending state, it is related to the Unguard CronJob, which creates too many jobs after a while. To resolve this, run the following command:
kubectl delete -f unguard/cronjob.yaml -n unguard
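Note that deleting the CronJob only stops new jobs from being scheduled; jobs that were already created stay around. If you need to clear them in bulk (this assumes every job in the unguard namespace comes from this CronJob):

# Remove all leftover jobs, and therefore their pods, in the unguard namespace
kubectl delete jobs --all -n unguard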
Once all the pods have been removed, you can re-create the CronJob:
kubectl apply -f unguard/cronjob.yaml -n unguard