From 54a2aa0d9999867a2a719c7a67bccb3933a2a7b5 Mon Sep 17 00:00:00 2001 From: M Obaydullah <52800314+obaydullahmhs@users.noreply.github.com> Date: Mon, 9 Sep 2024 12:02:49 +0600 Subject: [PATCH] Add Kafka Opsrequest and Autoscaler Docs (#656) * Add Kafka Opsrequest and Autoscaler Docs Signed-off-by: obaydullahmhs --- .../compute/kafka-broker-autoscaler.yaml | 24 + .../compute/kafka-combined-autoscaler.yaml | 24 + .../compute/kafka-controller-autoscaler.yaml | 24 + .../kafka/autoscaler/kafka-combined.yaml | 27 + .../kafka/autoscaler/kafka-topology.yaml | 48 + .../kafka-storage-autoscaler-combined.yaml | 14 + .../kafka-storage-autoscaler-topology.yaml | 19 + .../configuration/configsecret-combined.yaml | 9 + .../configuration/configsecret-topology.yaml | 11 + .../kafka/configuration/kafka-combined.yaml | 19 + .../kafka/configuration/kafka-topology.yaml | 30 + .../connectcluster-quickstart.yaml} | 0 .../mongodb-source-connector.yaml | 0 .../kafka/monitoring/kafka-builtin-prom.yaml | 26 + .../kafka/monitoring/kf-with-monitoring.yaml | 2 +- .../kafka/reconfigure-tls/kafka-add-tls.yaml | 23 + .../kafka/reconfigure-tls/kafka-issuer.yaml | 8 + .../reconfigure-tls/kafka-new-issuer.yaml | 8 + .../reconfigure-tls/kafka-remove-tls.yaml | 11 + .../kafka/reconfigure-tls/kafka-rotate.yaml | 11 + .../kafka-update-tls-issuer.yaml | 14 + .../examples/kafka/reconfigure-tls/kafka.yaml | 28 + .../kafka-combined-custom-config.yaml | 8 + .../kafka/reconfigure/kafka-combined.yaml | 19 + .../kafka-reconfigure-apply-combined.yaml | 15 + .../kafka-reconfigure-apply-topology.yaml | 18 + .../kafka-reconfigure-update-combined.yaml | 14 + .../kafka-reconfigure-update-topology.yaml | 14 + .../kafka-topology-custom-config.yaml | 10 + .../kafka/reconfigure/kafka-topology.yaml | 30 + .../new-kafka-combined-custom-config.yaml | 8 + .../new-kafka-topology-custom-config.yaml | 8 + docs/examples/kafka/restart/kafka.yaml | 44 + docs/examples/kafka/restart/ops.yaml | 11 + .../kafka/restproxy/restproxy-quickstart.yaml | 12 + .../kafka-hscale-down-combined.yaml | 11 + .../kafka-hscale-down-topology.yaml | 13 + .../kafka-hscale-up-combined.yaml | 11 + .../kafka-hscale-up-topology.yaml | 13 + .../kafka/scaling/kafka-combined.yaml | 17 + .../kafka/scaling/kafka-topology.yaml | 28 + .../kafka-vertical-scaling-combined.yaml | 20 + .../kafka-vertical-scaling-topology.yaml | 28 + .../schemaregistry-apicurio.yaml | 12 + .../kafka/tls/connectcluster-issuer.yaml | 8 + .../kafka/tls/connectcluster-tls.yaml | 21 + docs/examples/kafka/tls/kafka-dev-tls.yaml | 23 + docs/examples/kafka/tls/kafka-prod-tls.yaml | 34 + docs/examples/kafka/update-version/kafka.yaml | 44 + .../update-version/update-version-ops.yaml | 13 + .../volume-expansion/kafka-combined.yaml | 17 + .../volume-expansion/kafka-topology.yaml | 28 + .../kafka-volume-expansion-combined.yaml | 12 + .../kafka-volume-expansion-topology.yaml | 13 + docs/guides/kafka/README.md | 4 +- docs/guides/kafka/autoscaler/_index.md | 10 + .../guides/kafka/autoscaler/compute/_index.md | 10 + .../kafka/autoscaler/compute/combined.md | 469 +++++++ .../kafka/autoscaler/compute/overview.md | 55 + .../kafka/autoscaler/compute/topology.md | 852 ++++++++++++ .../guides/kafka/autoscaler/storage/_index.md | 10 + .../autoscaler/storage/kafka-combined.md | 469 +++++++ .../autoscaler/storage/kafka-topology.md | 684 ++++++++++ .../kafka/autoscaler/storage/overview.md | 57 + docs/guides/kafka/cli/cli.md | 10 +- .../clustering/topology-cluster/index.md | 47 +- docs/guides/kafka/concepts/appbinding.md | 2 +- 
docs/guides/kafka/concepts/connectcluster.md | 2 +- docs/guides/kafka/concepts/connector.md | 6 +- docs/guides/kafka/concepts/kafka.md | 3 +- docs/guides/kafka/concepts/kafkaautoscaler.md | 164 +++ .../kafka/concepts/kafkaconnectorversion.md | 4 +- docs/guides/kafka/concepts/kafkaopsrequest.md | 622 +++++++++ docs/guides/kafka/concepts/kafkaversion.md | 4 +- docs/guides/kafka/concepts/restproxy.md | 163 +++ docs/guides/kafka/concepts/schemaregistry.md | 163 +++ .../kafka/concepts/schemaregistryversion.md | 93 ++ docs/guides/kafka/configuration/_index.md | 10 + .../kafka/configuration/kafka-combined.md | 164 +++ .../kafka/configuration/kafka-topology.md | 204 +++ .../kafka/connectcluster/connectcluster.md | 10 +- .../index.md => connectcluster/overview.md} | 32 +- .../monitoring/using-builtin-prometheus.md | 371 ++++++ .../quickstart/{overview => }/kafka/index.md | 16 +- .../{overview => }/kafka/yamls/kafka-v1.yaml | 0 .../kafka/yamls/kafka-v1alpha2.yaml | 0 .../kafka/quickstart/overview/_index.md | 10 - docs/guides/kafka/reconfigure-tls/_index.md | 10 + docs/guides/kafka/reconfigure-tls/kafka.md | 1088 ++++++++++++++++ docs/guides/kafka/reconfigure-tls/overview.md | 54 + docs/guides/kafka/reconfigure/_index.md | 10 + .../kafka/reconfigure/kafka-combined.md | 506 ++++++++ .../kafka/reconfigure/kafka-topology.md | 625 +++++++++ docs/guides/kafka/reconfigure/overview.md | 54 + docs/guides/kafka/restart/_index.md | 10 + docs/guides/kafka/restart/restart.md | 252 ++++ docs/guides/kafka/restproxy/_index.md | 10 + docs/guides/kafka/restproxy/overview.md | 408 ++++++ docs/guides/kafka/scaling/_index.md | 10 + .../scaling/horizontal-scaling/_index.md | 10 + .../scaling/horizontal-scaling/combined.md | 969 ++++++++++++++ .../scaling/horizontal-scaling/overview.md | 54 + .../scaling/horizontal-scaling/topology.md | 1151 +++++++++++++++++ .../kafka/scaling/vertical-scaling/_index.md | 10 + .../scaling/vertical-scaling/combined.md | 308 +++++ .../scaling/vertical-scaling/overview.md | 54 + .../scaling/vertical-scaling/topology.md | 395 ++++++ docs/guides/kafka/schemaregistry/_index.md | 10 + docs/guides/kafka/schemaregistry/overview.md | 349 +++++ docs/guides/kafka/tls/combined.md | 250 ++++ docs/guides/kafka/tls/connectcluster.md | 224 ++++ docs/guides/kafka/tls/overview.md | 4 +- docs/guides/kafka/tls/topology.md | 253 ++++ docs/guides/kafka/update-version/_index.md | 10 + docs/guides/kafka/update-version/overview.md | 54 + .../kafka/update-version/update-version.md | 339 +++++ docs/guides/kafka/volume-expansion/_index.md | 10 + .../guides/kafka/volume-expansion/combined.md | 312 +++++ .../guides/kafka/volume-expansion/overview.md | 56 + .../guides/kafka/volume-expansion/topology.md | 357 +++++ .../kafka/kf-compute-autoscaling.svg | 148 +++ .../kafka/kf-horizontal-scaling.svg | 100 ++ .../kafka/kf-reconfigure-tls.svg | 100 ++ .../day-2-operation/kafka/kf-reconfigure.svg | 99 ++ .../kafka/kf-storage-autoscaling.svg | 174 +++ .../kafka/kf-update-version.svg | 105 ++ .../kafka/kf-vertical-scaling.svg | 105 ++ .../kafka/kf-volume-expansion.svg | 145 +++ .../monitoring/kafka-builtin-prom-target.png | Bin 0 -> 110573 bytes .../restproxy/restproxy-crd-lifecycle.png | Bin 0 -> 55119 bytes .../schemaregistry-crd-lifecycle.png | Bin 0 -> 59049 bytes .../schemaregistry-ui-apicurio.png | Bin 0 -> 57790 bytes 132 files changed, 14787 insertions(+), 82 deletions(-) create mode 100644 docs/examples/kafka/autoscaler/compute/kafka-broker-autoscaler.yaml create mode 100644 
docs/examples/kafka/autoscaler/compute/kafka-combined-autoscaler.yaml create mode 100644 docs/examples/kafka/autoscaler/compute/kafka-controller-autoscaler.yaml create mode 100644 docs/examples/kafka/autoscaler/kafka-combined.yaml create mode 100644 docs/examples/kafka/autoscaler/kafka-topology.yaml create mode 100644 docs/examples/kafka/autoscaler/storage/kafka-storage-autoscaler-combined.yaml create mode 100644 docs/examples/kafka/autoscaler/storage/kafka-storage-autoscaler-topology.yaml create mode 100644 docs/examples/kafka/configuration/configsecret-combined.yaml create mode 100644 docs/examples/kafka/configuration/configsecret-topology.yaml create mode 100644 docs/examples/kafka/configuration/kafka-combined.yaml create mode 100644 docs/examples/kafka/configuration/kafka-topology.yaml rename docs/{guides/kafka/quickstart/overview/connectcluster/yamls/connectcluster.yaml => examples/kafka/connectcluster/connectcluster-quickstart.yaml} (100%) rename docs/{guides/kafka/quickstart/overview/connectcluster/yamls => examples/kafka/connectcluster}/mongodb-source-connector.yaml (100%) create mode 100644 docs/examples/kafka/monitoring/kafka-builtin-prom.yaml create mode 100644 docs/examples/kafka/reconfigure-tls/kafka-add-tls.yaml create mode 100644 docs/examples/kafka/reconfigure-tls/kafka-issuer.yaml create mode 100644 docs/examples/kafka/reconfigure-tls/kafka-new-issuer.yaml create mode 100644 docs/examples/kafka/reconfigure-tls/kafka-remove-tls.yaml create mode 100644 docs/examples/kafka/reconfigure-tls/kafka-rotate.yaml create mode 100644 docs/examples/kafka/reconfigure-tls/kafka-update-tls-issuer.yaml create mode 100644 docs/examples/kafka/reconfigure-tls/kafka.yaml create mode 100644 docs/examples/kafka/reconfigure/kafka-combined-custom-config.yaml create mode 100644 docs/examples/kafka/reconfigure/kafka-combined.yaml create mode 100644 docs/examples/kafka/reconfigure/kafka-reconfigure-apply-combined.yaml create mode 100644 docs/examples/kafka/reconfigure/kafka-reconfigure-apply-topology.yaml create mode 100644 docs/examples/kafka/reconfigure/kafka-reconfigure-update-combined.yaml create mode 100644 docs/examples/kafka/reconfigure/kafka-reconfigure-update-topology.yaml create mode 100644 docs/examples/kafka/reconfigure/kafka-topology-custom-config.yaml create mode 100644 docs/examples/kafka/reconfigure/kafka-topology.yaml create mode 100644 docs/examples/kafka/reconfigure/new-kafka-combined-custom-config.yaml create mode 100644 docs/examples/kafka/reconfigure/new-kafka-topology-custom-config.yaml create mode 100644 docs/examples/kafka/restart/kafka.yaml create mode 100644 docs/examples/kafka/restart/ops.yaml create mode 100644 docs/examples/kafka/restproxy/restproxy-quickstart.yaml create mode 100644 docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-down-combined.yaml create mode 100644 docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-down-topology.yaml create mode 100644 docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-up-combined.yaml create mode 100644 docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-up-topology.yaml create mode 100644 docs/examples/kafka/scaling/kafka-combined.yaml create mode 100644 docs/examples/kafka/scaling/kafka-topology.yaml create mode 100644 docs/examples/kafka/scaling/vertical-scaling/kafka-vertical-scaling-combined.yaml create mode 100644 docs/examples/kafka/scaling/vertical-scaling/kafka-vertical-scaling-topology.yaml create mode 100644 docs/examples/kafka/schemaregistry/schemaregistry-apicurio.yaml create mode 
100644 docs/examples/kafka/tls/connectcluster-issuer.yaml create mode 100644 docs/examples/kafka/tls/connectcluster-tls.yaml create mode 100644 docs/examples/kafka/tls/kafka-dev-tls.yaml create mode 100644 docs/examples/kafka/tls/kafka-prod-tls.yaml create mode 100644 docs/examples/kafka/update-version/kafka.yaml create mode 100644 docs/examples/kafka/update-version/update-version-ops.yaml create mode 100644 docs/examples/kafka/volume-expansion/kafka-combined.yaml create mode 100644 docs/examples/kafka/volume-expansion/kafka-topology.yaml create mode 100644 docs/examples/kafka/volume-expansion/kafka-volume-expansion-combined.yaml create mode 100644 docs/examples/kafka/volume-expansion/kafka-volume-expansion-topology.yaml create mode 100644 docs/guides/kafka/autoscaler/_index.md create mode 100644 docs/guides/kafka/autoscaler/compute/_index.md create mode 100644 docs/guides/kafka/autoscaler/compute/combined.md create mode 100644 docs/guides/kafka/autoscaler/compute/overview.md create mode 100644 docs/guides/kafka/autoscaler/compute/topology.md create mode 100644 docs/guides/kafka/autoscaler/storage/_index.md create mode 100644 docs/guides/kafka/autoscaler/storage/kafka-combined.md create mode 100644 docs/guides/kafka/autoscaler/storage/kafka-topology.md create mode 100644 docs/guides/kafka/autoscaler/storage/overview.md create mode 100644 docs/guides/kafka/concepts/kafkaautoscaler.md create mode 100644 docs/guides/kafka/concepts/kafkaopsrequest.md create mode 100644 docs/guides/kafka/concepts/restproxy.md create mode 100644 docs/guides/kafka/concepts/schemaregistry.md create mode 100644 docs/guides/kafka/concepts/schemaregistryversion.md create mode 100644 docs/guides/kafka/configuration/_index.md create mode 100644 docs/guides/kafka/configuration/kafka-combined.md create mode 100644 docs/guides/kafka/configuration/kafka-topology.md rename docs/guides/kafka/{quickstart/overview/connectcluster/index.md => connectcluster/overview.md} (94%) create mode 100644 docs/guides/kafka/monitoring/using-builtin-prometheus.md rename docs/guides/kafka/quickstart/{overview => }/kafka/index.md (96%) rename docs/guides/kafka/quickstart/{overview => }/kafka/yamls/kafka-v1.yaml (100%) rename docs/guides/kafka/quickstart/{overview => }/kafka/yamls/kafka-v1alpha2.yaml (100%) delete mode 100644 docs/guides/kafka/quickstart/overview/_index.md create mode 100644 docs/guides/kafka/reconfigure-tls/_index.md create mode 100644 docs/guides/kafka/reconfigure-tls/kafka.md create mode 100644 docs/guides/kafka/reconfigure-tls/overview.md create mode 100644 docs/guides/kafka/reconfigure/_index.md create mode 100644 docs/guides/kafka/reconfigure/kafka-combined.md create mode 100644 docs/guides/kafka/reconfigure/kafka-topology.md create mode 100644 docs/guides/kafka/reconfigure/overview.md create mode 100644 docs/guides/kafka/restart/_index.md create mode 100644 docs/guides/kafka/restart/restart.md create mode 100644 docs/guides/kafka/restproxy/_index.md create mode 100644 docs/guides/kafka/restproxy/overview.md create mode 100644 docs/guides/kafka/scaling/_index.md create mode 100644 docs/guides/kafka/scaling/horizontal-scaling/_index.md create mode 100644 docs/guides/kafka/scaling/horizontal-scaling/combined.md create mode 100644 docs/guides/kafka/scaling/horizontal-scaling/overview.md create mode 100644 docs/guides/kafka/scaling/horizontal-scaling/topology.md create mode 100644 docs/guides/kafka/scaling/vertical-scaling/_index.md create mode 100644 docs/guides/kafka/scaling/vertical-scaling/combined.md create mode 100644 
docs/guides/kafka/scaling/vertical-scaling/overview.md create mode 100644 docs/guides/kafka/scaling/vertical-scaling/topology.md create mode 100644 docs/guides/kafka/schemaregistry/_index.md create mode 100644 docs/guides/kafka/schemaregistry/overview.md create mode 100644 docs/guides/kafka/tls/combined.md create mode 100644 docs/guides/kafka/tls/connectcluster.md create mode 100644 docs/guides/kafka/tls/topology.md create mode 100644 docs/guides/kafka/update-version/_index.md create mode 100644 docs/guides/kafka/update-version/overview.md create mode 100644 docs/guides/kafka/update-version/update-version.md create mode 100644 docs/guides/kafka/volume-expansion/_index.md create mode 100644 docs/guides/kafka/volume-expansion/combined.md create mode 100644 docs/guides/kafka/volume-expansion/overview.md create mode 100644 docs/guides/kafka/volume-expansion/topology.md create mode 100644 docs/images/day-2-operation/kafka/kf-compute-autoscaling.svg create mode 100644 docs/images/day-2-operation/kafka/kf-horizontal-scaling.svg create mode 100644 docs/images/day-2-operation/kafka/kf-reconfigure-tls.svg create mode 100644 docs/images/day-2-operation/kafka/kf-reconfigure.svg create mode 100644 docs/images/day-2-operation/kafka/kf-storage-autoscaling.svg create mode 100644 docs/images/day-2-operation/kafka/kf-update-version.svg create mode 100644 docs/images/day-2-operation/kafka/kf-vertical-scaling.svg create mode 100644 docs/images/day-2-operation/kafka/kf-volume-expansion.svg create mode 100644 docs/images/kafka/monitoring/kafka-builtin-prom-target.png create mode 100644 docs/images/kafka/restproxy/restproxy-crd-lifecycle.png create mode 100644 docs/images/kafka/schemaregistry/schemaregistry-crd-lifecycle.png create mode 100644 docs/images/kafka/schemaregistry/schemaregistry-ui-apicurio.png diff --git a/docs/examples/kafka/autoscaler/compute/kafka-broker-autoscaler.yaml b/docs/examples/kafka/autoscaler/compute/kafka-broker-autoscaler.yaml new file mode 100644 index 0000000000..1e2a722a5b --- /dev/null +++ b/docs/examples/kafka/autoscaler/compute/kafka-broker-autoscaler.yaml @@ -0,0 +1,24 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: KafkaAutoscaler +metadata: + name: kf-broker-autoscaler + namespace: demo +spec: + databaseRef: + name: kafka-prod + opsRequestOptions: + timeout: 5m + apply: IfReady + compute: + broker: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 600m + memory: 1.5Gi + maxAllowed: + cpu: 1 + memory: 2Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" \ No newline at end of file diff --git a/docs/examples/kafka/autoscaler/compute/kafka-combined-autoscaler.yaml b/docs/examples/kafka/autoscaler/compute/kafka-combined-autoscaler.yaml new file mode 100644 index 0000000000..b2c2e663b2 --- /dev/null +++ b/docs/examples/kafka/autoscaler/compute/kafka-combined-autoscaler.yaml @@ -0,0 +1,24 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: KafkaAutoscaler +metadata: + name: kf-combined-autoscaler + namespace: demo +spec: + databaseRef: + name: kafka-dev + opsRequestOptions: + timeout: 5m + apply: IfReady + compute: + node: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 600m + memory: 1.5Gi + maxAllowed: + cpu: 1 + memory: 2Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" \ No newline at end of file diff --git a/docs/examples/kafka/autoscaler/compute/kafka-controller-autoscaler.yaml 
b/docs/examples/kafka/autoscaler/compute/kafka-controller-autoscaler.yaml new file mode 100644 index 0000000000..e6e4999059 --- /dev/null +++ b/docs/examples/kafka/autoscaler/compute/kafka-controller-autoscaler.yaml @@ -0,0 +1,24 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: KafkaAutoscaler +metadata: + name: kf-controller-autoscaler + namespace: demo +spec: + databaseRef: + name: kafka-prod + opsRequestOptions: + timeout: 5m + apply: IfReady + compute: + controller: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 600m + memory: 1.5Gi + maxAllowed: + cpu: 1 + memory: 2Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" \ No newline at end of file diff --git a/docs/examples/kafka/autoscaler/kafka-combined.yaml b/docs/examples/kafka/autoscaler/kafka-combined.yaml new file mode 100644 index 0000000000..d674b750b6 --- /dev/null +++ b/docs/examples/kafka/autoscaler/kafka-combined.yaml @@ -0,0 +1,27 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-dev + namespace: demo +spec: + replicas: 2 + version: 3.6.1 + podTemplate: + spec: + containers: + - name: kafka + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +# storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/autoscaler/kafka-topology.yaml b/docs/examples/kafka/autoscaler/kafka-topology.yaml new file mode 100644 index 0000000000..9b2fd98558 --- /dev/null +++ b/docs/examples/kafka/autoscaler/kafka-topology.yaml @@ -0,0 +1,48 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.6.1 + topology: + broker: + replicas: 2 + podTemplate: + spec: + containers: + - name: kafka + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 2 + podTemplate: + spec: + containers: + - name: kafka + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/autoscaler/storage/kafka-storage-autoscaler-combined.yaml b/docs/examples/kafka/autoscaler/storage/kafka-storage-autoscaler-combined.yaml new file mode 100644 index 0000000000..8860129c7b --- /dev/null +++ b/docs/examples/kafka/autoscaler/storage/kafka-storage-autoscaler-combined.yaml @@ -0,0 +1,14 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: KafkaAutoscaler +metadata: + name: kf-storage-autoscaler-combined + namespace: demo +spec: + databaseRef: + name: kafka-dev + storage: + node: + expansionMode: "Online" + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 \ No newline at end of file diff --git a/docs/examples/kafka/autoscaler/storage/kafka-storage-autoscaler-topology.yaml b/docs/examples/kafka/autoscaler/storage/kafka-storage-autoscaler-topology.yaml new file mode 100644 index 0000000000..3800820d63 --- /dev/null +++ b/docs/examples/kafka/autoscaler/storage/kafka-storage-autoscaler-topology.yaml @@ -0,0 +1,19 @@ +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: KafkaAutoscaler +metadata: + name: kf-storage-autoscaler-topology + namespace: 
demo +spec: + databaseRef: + name: kafka-prod + storage: + broker: + expansionMode: "Online" + trigger: "On" + usageThreshold: 60 + scalingThreshold: 100 + controller: + expansionMode: "Online" + trigger: "On" + usageThreshold: 60 + scalingThreshold: 100 \ No newline at end of file diff --git a/docs/examples/kafka/configuration/configsecret-combined.yaml b/docs/examples/kafka/configuration/configsecret-combined.yaml new file mode 100644 index 0000000000..b32e9c98a7 --- /dev/null +++ b/docs/examples/kafka/configuration/configsecret-combined.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +metadata: + name: configsecret-combined + namespace: demo +stringData: + server.properties: |- + log.retention.hours=100 + default.replication.factor=2 \ No newline at end of file diff --git a/docs/examples/kafka/configuration/configsecret-topology.yaml b/docs/examples/kafka/configuration/configsecret-topology.yaml new file mode 100644 index 0000000000..c32be5103c --- /dev/null +++ b/docs/examples/kafka/configuration/configsecret-topology.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Secret +metadata: + name: configsecret-topology + namespace: demo +stringData: + broker.properties: |- + log.retention.hours=100 + default.replication.factor=2 + controller.properties: |- + metadata.log.dir=/var/log/kafka/metadata-custom \ No newline at end of file diff --git a/docs/examples/kafka/configuration/kafka-combined.yaml b/docs/examples/kafka/configuration/kafka-combined.yaml new file mode 100644 index 0000000000..fd61f4701b --- /dev/null +++ b/docs/examples/kafka/configuration/kafka-combined.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-dev + namespace: demo +spec: + replicas: 2 + version: 3.6.1 + configSecret: + name: configsecret-combined + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/configuration/kafka-topology.yaml b/docs/examples/kafka/configuration/kafka-topology.yaml new file mode 100644 index 0000000000..6359857f64 --- /dev/null +++ b/docs/examples/kafka/configuration/kafka-topology.yaml @@ -0,0 +1,30 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.6.1 + configSecret: + name: configsecret-topology + topology: + broker: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/guides/kafka/quickstart/overview/connectcluster/yamls/connectcluster.yaml b/docs/examples/kafka/connectcluster/connectcluster-quickstart.yaml similarity index 100% rename from docs/guides/kafka/quickstart/overview/connectcluster/yamls/connectcluster.yaml rename to docs/examples/kafka/connectcluster/connectcluster-quickstart.yaml diff --git a/docs/guides/kafka/quickstart/overview/connectcluster/yamls/mongodb-source-connector.yaml b/docs/examples/kafka/connectcluster/mongodb-source-connector.yaml similarity index 100% rename from docs/guides/kafka/quickstart/overview/connectcluster/yamls/mongodb-source-connector.yaml rename to docs/examples/kafka/connectcluster/mongodb-source-connector.yaml diff --git a/docs/examples/kafka/monitoring/kafka-builtin-prom.yaml 
b/docs/examples/kafka/monitoring/kafka-builtin-prom.yaml new file mode 100644 index 0000000000..62a0d9683a --- /dev/null +++ b/docs/examples/kafka/monitoring/kafka-builtin-prom.yaml @@ -0,0 +1,26 @@ +apiVersion: kubedb.com/v1alpha2 +kind: Kafka +metadata: + name: kafka-builtin-prom + namespace: demo +spec: + replicas: 2 + version: 3.6.1 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + monitor: + agent: prometheus.io/builtin + prometheus: + exporter: + port: 56790 + serviceMonitor: + labels: + release: prometheus + interval: 10s + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/monitoring/kf-with-monitoring.yaml b/docs/examples/kafka/monitoring/kf-with-monitoring.yaml index b799d9f005..cdce7aa9d4 100644 --- a/docs/examples/kafka/monitoring/kf-with-monitoring.yaml +++ b/docs/examples/kafka/monitoring/kf-with-monitoring.yaml @@ -23,7 +23,7 @@ spec: agent: prometheus.io/operator prometheus: exporter: - port: 9091 + port: 56790 serviceMonitor: labels: release: prometheus diff --git a/docs/examples/kafka/reconfigure-tls/kafka-add-tls.yaml b/docs/examples/kafka/reconfigure-tls/kafka-add-tls.yaml new file mode 100644 index 0000000000..f36789f125 --- /dev/null +++ b/docs/examples/kafka/reconfigure-tls/kafka-add-tls.yaml @@ -0,0 +1,23 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: kafka-prod + tls: + issuerRef: + name: kf-issuer + kind: Issuer + apiGroup: "cert-manager.io" + certificates: + - alias: client + subject: + organizations: + - kafka + organizationalUnits: + - client + timeout: 5m + apply: IfReady \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure-tls/kafka-issuer.yaml b/docs/examples/kafka/reconfigure-tls/kafka-issuer.yaml new file mode 100644 index 0000000000..912c34fc49 --- /dev/null +++ b/docs/examples/kafka/reconfigure-tls/kafka-issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: kf-issuer + namespace: demo +spec: + ca: + secretName: kafka-ca \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure-tls/kafka-new-issuer.yaml b/docs/examples/kafka/reconfigure-tls/kafka-new-issuer.yaml new file mode 100644 index 0000000000..7b9c49d393 --- /dev/null +++ b/docs/examples/kafka/reconfigure-tls/kafka-new-issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: kf-new-issuer + namespace: demo +spec: + ca: + secretName: kafka-new-ca \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure-tls/kafka-remove-tls.yaml b/docs/examples/kafka/reconfigure-tls/kafka-remove-tls.yaml new file mode 100644 index 0000000000..4f0622eb90 --- /dev/null +++ b/docs/examples/kafka/reconfigure-tls/kafka-remove-tls.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: kafka-prod + tls: + remove: true \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure-tls/kafka-rotate.yaml b/docs/examples/kafka/reconfigure-tls/kafka-rotate.yaml new file mode 100644 index 0000000000..db8d715861 --- /dev/null +++ b/docs/examples/kafka/reconfigure-tls/kafka-rotate.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-rotate + namespace: demo +spec: + type: 
ReconfigureTLS + databaseRef: + name: kafka-prod + tls: + rotateCertificates: true \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure-tls/kafka-update-tls-issuer.yaml b/docs/examples/kafka/reconfigure-tls/kafka-update-tls-issuer.yaml new file mode 100644 index 0000000000..4e29dd7ab0 --- /dev/null +++ b/docs/examples/kafka/reconfigure-tls/kafka-update-tls-issuer.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-update-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: kafka-prod + tls: + issuerRef: + name: kf-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure-tls/kafka.yaml b/docs/examples/kafka/reconfigure-tls/kafka.yaml new file mode 100644 index 0000000000..e8112984dc --- /dev/null +++ b/docs/examples/kafka/reconfigure-tls/kafka.yaml @@ -0,0 +1,28 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.6.1 + topology: + broker: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure/kafka-combined-custom-config.yaml b/docs/examples/kafka/reconfigure/kafka-combined-custom-config.yaml new file mode 100644 index 0000000000..18f8cf53df --- /dev/null +++ b/docs/examples/kafka/reconfigure/kafka-combined-custom-config.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: kf-combined-custom-config + namespace: demo +stringData: + server.properties: |- + log.retention.hours=100 \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure/kafka-combined.yaml b/docs/examples/kafka/reconfigure/kafka-combined.yaml new file mode 100644 index 0000000000..9f5fcbe740 --- /dev/null +++ b/docs/examples/kafka/reconfigure/kafka-combined.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-dev + namespace: demo +spec: + replicas: 2 + version: 3.6.1 + configSecret: + name: kf-combined-custom-config + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure/kafka-reconfigure-apply-combined.yaml b/docs/examples/kafka/reconfigure/kafka-reconfigure-apply-combined.yaml new file mode 100644 index 0000000000..c945a4d15d --- /dev/null +++ b/docs/examples/kafka/reconfigure/kafka-reconfigure-apply-combined.yaml @@ -0,0 +1,15 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfigure-apply-combined + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-dev + configuration: + applyConfig: + server.properties: |- + log.retention.hours=150 + timeout: 5m + apply: IfReady \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure/kafka-reconfigure-apply-topology.yaml b/docs/examples/kafka/reconfigure/kafka-reconfigure-apply-topology.yaml new file mode 100644 index 0000000000..162b149bfb --- /dev/null +++ b/docs/examples/kafka/reconfigure/kafka-reconfigure-apply-topology.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest 
+metadata: + name: kfops-reconfigure-apply-topology + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-prod + configuration: + applyConfig: + broker.properties: |- + log.retention.hours=150 + controller.properties: |- + controller.quorum.election.timeout.ms=4000 + controller.quorum.fetch.timeout.ms=5000 + timeout: 5m + apply: IfReady \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure/kafka-reconfigure-update-combined.yaml b/docs/examples/kafka/reconfigure/kafka-reconfigure-update-combined.yaml new file mode 100644 index 0000000000..9382a2b025 --- /dev/null +++ b/docs/examples/kafka/reconfigure/kafka-reconfigure-update-combined.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfigure-combined + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-dev + configuration: + configSecret: + name: new-kf-combined-custom-config + timeout: 5m + apply: IfReady \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure/kafka-reconfigure-update-topology.yaml b/docs/examples/kafka/reconfigure/kafka-reconfigure-update-topology.yaml new file mode 100644 index 0000000000..f4b9f5cc0d --- /dev/null +++ b/docs/examples/kafka/reconfigure/kafka-reconfigure-update-topology.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfigure-topology + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-prod + configuration: + configSecret: + name: new-kf-topology-custom-config + timeout: 5m + apply: IfReady \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure/kafka-topology-custom-config.yaml b/docs/examples/kafka/reconfigure/kafka-topology-custom-config.yaml new file mode 100644 index 0000000000..a113be5ae3 --- /dev/null +++ b/docs/examples/kafka/reconfigure/kafka-topology-custom-config.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Secret +metadata: + name: kf-topology-custom-config + namespace: demo +stringData: + broker.properties: |- + log.retention.hours=100 + controller.properties: |- + controller.quorum.election.timeout.ms=2000 \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure/kafka-topology.yaml b/docs/examples/kafka/reconfigure/kafka-topology.yaml new file mode 100644 index 0000000000..20488615a8 --- /dev/null +++ b/docs/examples/kafka/reconfigure/kafka-topology.yaml @@ -0,0 +1,30 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.6.1 + configSecret: + name: kf-topology-custom-config + topology: + broker: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/reconfigure/new-kafka-combined-custom-config.yaml b/docs/examples/kafka/reconfigure/new-kafka-combined-custom-config.yaml new file mode 100644 index 0000000000..b7daa9beb4 --- /dev/null +++ b/docs/examples/kafka/reconfigure/new-kafka-combined-custom-config.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: new-kf-combined-custom-config + namespace: demo +stringData: + server.properties: |- + log.retention.hours=125 \ No newline at end of file diff --git 
a/docs/examples/kafka/reconfigure/new-kafka-topology-custom-config.yaml b/docs/examples/kafka/reconfigure/new-kafka-topology-custom-config.yaml new file mode 100644 index 0000000000..3bf34a3ded --- /dev/null +++ b/docs/examples/kafka/reconfigure/new-kafka-topology-custom-config.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: new-kf-topology-custom-config + namespace: demo +stringData: + broker.properties: |- + log.retention.hours=125 \ No newline at end of file diff --git a/docs/examples/kafka/restart/kafka.yaml b/docs/examples/kafka/restart/kafka.yaml new file mode 100644 index 0000000000..b395dbecc3 --- /dev/null +++ b/docs/examples/kafka/restart/kafka.yaml @@ -0,0 +1,44 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.6.1 + topology: + broker: + replicas: 2 + podTemplate: + spec: + containers: + - name: kafka + resources: + requests: + cpu: "500m" + memory: "1Gi" + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 2 + podTemplate: + spec: + containers: + - name: kafka + resources: + requests: + cpu: "500m" + memory: "1Gi" + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: DoNotTerminate \ No newline at end of file diff --git a/docs/examples/kafka/restart/ops.yaml b/docs/examples/kafka/restart/ops.yaml new file mode 100644 index 0000000000..8772b0f77e --- /dev/null +++ b/docs/examples/kafka/restart/ops.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: restart + namespace: demo +spec: + type: Restart + databaseRef: + name: kafka-prod + timeout: 5m + apply: Always \ No newline at end of file diff --git a/docs/examples/kafka/restproxy/restproxy-quickstart.yaml b/docs/examples/kafka/restproxy/restproxy-quickstart.yaml new file mode 100644 index 0000000000..ab85356e24 --- /dev/null +++ b/docs/examples/kafka/restproxy/restproxy-quickstart.yaml @@ -0,0 +1,12 @@ +apiVersion: kafka.kubedb.com/v1alpha1 +kind: RestProxy +metadata: + name: restproxy-quickstart + namespace: demo +spec: + version: 3.15.0 + replicas: 2 + kafkaRef: + name: kafka-quickstart + namespace: demo + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-down-combined.yaml b/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-down-combined.yaml new file mode 100644 index 0000000000..0cb298d523 --- /dev/null +++ b/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-down-combined.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-hscale-down-combined + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: kafka-dev + horizontalScaling: + node: 2 \ No newline at end of file diff --git a/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-down-topology.yaml b/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-down-topology.yaml new file mode 100644 index 0000000000..0706afe78c --- /dev/null +++ b/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-down-topology.yaml @@ -0,0 +1,13 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-hscale-down-topology + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: kafka-prod + horizontalScaling: + topology: + broker: 2 + controller: 2 \ 
No newline at end of file diff --git a/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-up-combined.yaml b/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-up-combined.yaml new file mode 100644 index 0000000000..e302cbd2fe --- /dev/null +++ b/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-up-combined.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-hscale-up-combined + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: kafka-dev + horizontalScaling: + node: 3 \ No newline at end of file diff --git a/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-up-topology.yaml b/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-up-topology.yaml new file mode 100644 index 0000000000..0a71039967 --- /dev/null +++ b/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-up-topology.yaml @@ -0,0 +1,13 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-hscale-up-topology + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: kafka-prod + horizontalScaling: + topology: + broker: 3 + controller: 3 \ No newline at end of file diff --git a/docs/examples/kafka/scaling/kafka-combined.yaml b/docs/examples/kafka/scaling/kafka-combined.yaml new file mode 100644 index 0000000000..f401c1440e --- /dev/null +++ b/docs/examples/kafka/scaling/kafka-combined.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-dev + namespace: demo +spec: + replicas: 2 + version: 3.6.1 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/scaling/kafka-topology.yaml b/docs/examples/kafka/scaling/kafka-topology.yaml new file mode 100644 index 0000000000..e8112984dc --- /dev/null +++ b/docs/examples/kafka/scaling/kafka-topology.yaml @@ -0,0 +1,28 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.6.1 + topology: + broker: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/scaling/vertical-scaling/kafka-vertical-scaling-combined.yaml b/docs/examples/kafka/scaling/vertical-scaling/kafka-vertical-scaling-combined.yaml new file mode 100644 index 0000000000..38d51a3376 --- /dev/null +++ b/docs/examples/kafka/scaling/vertical-scaling/kafka-vertical-scaling-combined.yaml @@ -0,0 +1,20 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-vscale-combined + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: kafka-dev + verticalScaling: + node: + resources: + requests: + memory: "1.2Gi" + cpu: "0.6" + limits: + memory: "1.2Gi" + cpu: "0.6" + timeout: 5m + apply: IfReady \ No newline at end of file diff --git a/docs/examples/kafka/scaling/vertical-scaling/kafka-vertical-scaling-topology.yaml b/docs/examples/kafka/scaling/vertical-scaling/kafka-vertical-scaling-topology.yaml new file mode 100644 index 0000000000..3b890be76d --- /dev/null +++ b/docs/examples/kafka/scaling/vertical-scaling/kafka-vertical-scaling-topology.yaml @@ 
-0,0 +1,28 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-vscale-topology + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: kafka-prod + verticalScaling: + broker: + resources: + requests: + memory: "1.2Gi" + cpu: "0.6" + limits: + memory: "1.2Gi" + cpu: "0.6" + controller: + resources: + requests: + memory: "1.1Gi" + cpu: "0.6" + limits: + memory: "1.1Gi" + cpu: "0.6" + timeout: 5m + apply: IfReady \ No newline at end of file diff --git a/docs/examples/kafka/schemaregistry/schemaregistry-apicurio.yaml b/docs/examples/kafka/schemaregistry/schemaregistry-apicurio.yaml new file mode 100644 index 0000000000..875433aaaf --- /dev/null +++ b/docs/examples/kafka/schemaregistry/schemaregistry-apicurio.yaml @@ -0,0 +1,12 @@ +apiVersion: kafka.kubedb.com/v1alpha1 +kind: SchemaRegistry +metadata: + name: schemaregistry-quickstart + namespace: demo +spec: + version: 2.5.11.final + replicas: 2 + kafkaRef: + name: kafka-quickstart + namespace: demo + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/tls/connectcluster-issuer.yaml b/docs/examples/kafka/tls/connectcluster-issuer.yaml new file mode 100644 index 0000000000..a8777926f2 --- /dev/null +++ b/docs/examples/kafka/tls/connectcluster-issuer.yaml @@ -0,0 +1,8 @@ +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: connectcluster-ca-issuer + namespace: demo +spec: + ca: + secretName: connectcluster-ca \ No newline at end of file diff --git a/docs/examples/kafka/tls/connectcluster-tls.yaml b/docs/examples/kafka/tls/connectcluster-tls.yaml new file mode 100644 index 0000000000..5ac77544d5 --- /dev/null +++ b/docs/examples/kafka/tls/connectcluster-tls.yaml @@ -0,0 +1,21 @@ +apiVersion: kafka.kubedb.com/v1alpha1 +kind: ConnectCluster +metadata: + name: connectcluster-tls + namespace: demo +spec: + version: 3.6.1 + enableSSL: true + tls: + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: connectcluster-ca-issuer + replicas: 3 + connectorPlugins: + - postgres-2.4.2.final + - jdbc-2.6.1.final + kafkaRef: + name: kafka-prod + namespace: demo + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/tls/kafka-dev-tls.yaml b/docs/examples/kafka/tls/kafka-dev-tls.yaml new file mode 100644 index 0000000000..c3c163b83a --- /dev/null +++ b/docs/examples/kafka/tls/kafka-dev-tls.yaml @@ -0,0 +1,23 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-dev-tls + namespace: demo +spec: + version: 3.6.1 + enableSSL: true + tls: + issuerRef: + apiGroup: "cert-manager.io" + kind: Issuer + name: kafka-ca-issuer + replicas: 3 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/tls/kafka-prod-tls.yaml b/docs/examples/kafka/tls/kafka-prod-tls.yaml new file mode 100644 index 0000000000..f939caa1d3 --- /dev/null +++ b/docs/examples/kafka/tls/kafka-prod-tls.yaml @@ -0,0 +1,34 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-prod-tls + namespace: demo +spec: + version: 3.6.1 + enableSSL: true + tls: + issuerRef: + apiGroup: "cert-manager.io" + kind: Issuer + name: kafka-ca-issuer + topology: + broker: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + 
storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/update-version/kafka.yaml b/docs/examples/kafka/update-version/kafka.yaml new file mode 100644 index 0000000000..6e9fd84e63 --- /dev/null +++ b/docs/examples/kafka/update-version/kafka.yaml @@ -0,0 +1,44 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.5.2 + topology: + broker: + replicas: 2 + podTemplate: + spec: + containers: + - name: kafka + resources: + requests: + cpu: "500m" + memory: "1Gi" + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 2 + podTemplate: + spec: + containers: + - name: kafka + resources: + requests: + cpu: "500m" + memory: "1Gi" + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/update-version/update-version-ops.yaml b/docs/examples/kafka/update-version/update-version-ops.yaml new file mode 100644 index 0000000000..5fd4bf4ecb --- /dev/null +++ b/docs/examples/kafka/update-version/update-version-ops.yaml @@ -0,0 +1,13 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kafka-update-version + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: kafka-prod + updateVersion: + targetVersion: 3.6.1 + timeout: 5m + apply: IfReady \ No newline at end of file diff --git a/docs/examples/kafka/volume-expansion/kafka-combined.yaml b/docs/examples/kafka/volume-expansion/kafka-combined.yaml new file mode 100644 index 0000000000..f401c1440e --- /dev/null +++ b/docs/examples/kafka/volume-expansion/kafka-combined.yaml @@ -0,0 +1,17 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-dev + namespace: demo +spec: + replicas: 2 + version: 3.6.1 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/volume-expansion/kafka-topology.yaml b/docs/examples/kafka/volume-expansion/kafka-topology.yaml new file mode 100644 index 0000000000..e8112984dc --- /dev/null +++ b/docs/examples/kafka/volume-expansion/kafka-topology.yaml @@ -0,0 +1,28 @@ +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.6.1 + topology: + broker: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut \ No newline at end of file diff --git a/docs/examples/kafka/volume-expansion/kafka-volume-expansion-combined.yaml b/docs/examples/kafka/volume-expansion/kafka-volume-expansion-combined.yaml new file mode 100644 index 0000000000..ac4bff75e7 --- /dev/null +++ b/docs/examples/kafka/volume-expansion/kafka-volume-expansion-combined.yaml @@ -0,0 +1,12 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kf-volume-exp-combined + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: kafka-dev + volumeExpansion: + node: 2Gi + mode: Online \ No newline at end of file diff --git 
a/docs/examples/kafka/volume-expansion/kafka-volume-expansion-topology.yaml b/docs/examples/kafka/volume-expansion/kafka-volume-expansion-topology.yaml new file mode 100644 index 0000000000..95cc632a66 --- /dev/null +++ b/docs/examples/kafka/volume-expansion/kafka-volume-expansion-topology.yaml @@ -0,0 +1,13 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kf-volume-exp-topology + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: kafka-prod + volumeExpansion: + broker: 3Gi + controller: 2Gi + mode: Online \ No newline at end of file diff --git a/docs/guides/kafka/README.md b/docs/guides/kafka/README.md index aaf932b36a..0038f83e05 100644 --- a/docs/guides/kafka/README.md +++ b/docs/guides/kafka/README.md @@ -80,8 +80,8 @@ KubeDB supports The following Kafka versions. Supported version are applicable f ## User Guide -- [Quickstart Kafka](/docs/guides/kafka/quickstart/overview/kafka/index.md) with KubeDB Operator. -- [Quickstart ConnectCluster](/docs/guides/kafka/quickstart/overview/connectcluster/index.md) with KubeDB Operator. +- [Quickstart Kafka](/docs/guides/kafka/quickstart/kafka/index.md) with KubeDB Operator. +- [Quickstart ConnectCluster](/docs/guides/kafka/connectcluster/overview.md) with KubeDB Operator. - Kafka Clustering supported by KubeDB - [Combined Clustering](/docs/guides/kafka/clustering/combined-cluster/index.md) - [Topology Clustering](/docs/guides/kafka/clustering/topology-cluster/index.md) diff --git a/docs/guides/kafka/autoscaler/_index.md b/docs/guides/kafka/autoscaler/_index.md new file mode 100644 index 0000000000..22a1e3830d --- /dev/null +++ b/docs/guides/kafka/autoscaler/_index.md @@ -0,0 +1,10 @@ +--- +title: Autoscaling +menu: + docs_{{ .version }}: + identifier: kf-auto-scaling + name: Autoscaling + parent: kf-kafka-guides + weight: 46 +menu_name: docs_{{ .version }} +--- \ No newline at end of file diff --git a/docs/guides/kafka/autoscaler/compute/_index.md b/docs/guides/kafka/autoscaler/compute/_index.md new file mode 100644 index 0000000000..78729bab87 --- /dev/null +++ b/docs/guides/kafka/autoscaler/compute/_index.md @@ -0,0 +1,10 @@ +--- +title: Compute Autoscaling +menu: + docs_{{ .version }}: + identifier: kf-compute-auto-scaling + name: Compute Autoscaling + parent: kf-auto-scaling + weight: 46 +menu_name: docs_{{ .version }} +--- diff --git a/docs/guides/kafka/autoscaler/compute/combined.md b/docs/guides/kafka/autoscaler/compute/combined.md new file mode 100644 index 0000000000..7822410960 --- /dev/null +++ b/docs/guides/kafka/autoscaler/compute/combined.md @@ -0,0 +1,469 @@ +--- +title: Kafka Combined Autoscaling +menu: + docs_{{ .version }}: + identifier: kf-auto-scaling-combined + name: Combined Cluster + parent: kf-compute-auto-scaling + weight: 20 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Autoscaling the Compute Resource of a Kafka Combined Cluster + +This guide will show you how to use `KubeDB` to autoscale compute resources i.e. cpu and memory of a Kafka combined cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md). 
+ +- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation) + +- You should be familiar with the following `KubeDB` concepts: + - [Kafka](/docs/guides/kafka/concepts/kafka.md) + - [KafkaAutoscaler](/docs/guides/kafka/concepts/kafkaautoscaler.md) + - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md) + - [Compute Resource Autoscaling Overview](/docs/guides/kafka/autoscaler/compute/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in the [docs/examples/kafka](/docs/examples/kafka) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Autoscaling of Combined Cluster + +Here, we are going to deploy a `Kafka` Combined Cluster using a version supported by the `KubeDB` operator. Then we are going to apply `KafkaAutoscaler` to set up autoscaling. + +#### Deploy Kafka Combined Cluster + +In this section, we are going to deploy a Kafka combined cluster with version `3.6.1`. Then, in the next section, we will set up autoscaling for this database using the `KafkaAutoscaler` CRD. Below is the YAML of the `Kafka` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-dev + namespace: demo +spec: + replicas: 2 + version: 3.6.1 + podTemplate: + spec: + containers: + - name: kafka + resources: + limits: + memory: 1Gi + requests: + cpu: 500m + memory: 1Gi + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut +``` + +Let's create the `Kafka` CRO we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/autoscaler/kafka-combined.yaml +kafka.kubedb.com/kafka-dev created +``` + +Now, wait until `kafka-dev` has status `Ready`, i.e., + +```bash +$ kubectl get kf -n demo -w +NAME TYPE VERSION STATUS AGE +kafka-dev kubedb.com/v1 3.6.1 Provisioning 0s +kafka-dev kubedb.com/v1 3.6.1 Provisioning 24s +. +. +kafka-dev kubedb.com/v1 3.6.1 Ready 92s +``` + +Let's check the Pod's container resources, + +```bash +$ kubectl get pod -n demo kafka-dev-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "memory": "1Gi" + }, + "requests": { + "cpu": "500m", + "memory": "1Gi" + } +} +``` + +Let's check the Kafka resources, +```bash +$ kubectl get kafka -n demo kafka-dev -o json | jq '.spec.podTemplate.spec.containers[].resources' +{ + "limits": { + "memory": "1Gi" + }, + "requests": { + "cpu": "500m", + "memory": "1Gi" + } +} +``` + +You can see from the above outputs that the resources are the same as the ones we assigned while deploying the Kafka. + +We are now ready to apply the `KafkaAutoscaler` CRO to set up autoscaling for this database. + +### Compute Resource Autoscaling + +Here, we are going to set up compute resource autoscaling using a KafkaAutoscaler Object. + +#### Create KafkaAutoscaler Object + +In order to set up compute resource autoscaling for this combined cluster, we have to create a `KafkaAutoscaler` CRO with our desired configuration.
Below is the YAML of the `KafkaAutoscaler` object that we are going to create, + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: KafkaAutoscaler +metadata: + name: kf-combined-autoscaler + namespace: demo +spec: + databaseRef: + name: kafka-dev + opsRequestOptions: + timeout: 5m + apply: IfReady + compute: + node: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 600m + memory: 1.5Gi + maxAllowed: + cpu: 1 + memory: 2Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing the compute resource scaling operation on the `kafka-dev` cluster. +- `spec.compute.node.trigger` specifies that compute autoscaling is enabled for this cluster. +- `spec.compute.node.podLifeTimeThreshold` specifies the minimum lifetime of at least one pod before a vertical scaling can be initiated. +- `spec.compute.node.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%. + If the difference between the current and recommended resources is less than `resourceDiffPercentage`, the autoscaler operator skips the update (see the sketch after this list). +- `spec.compute.node.minAllowed` specifies the minimum allowed resources for the cluster. +- `spec.compute.node.maxAllowed` specifies the maximum allowed resources for the cluster. +- `spec.compute.node.controlledResources` specifies the resources that are controlled by the autoscaler. +- `spec.compute.node.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits". +- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has two fields. + - `timeout` specifies the timeout for the OpsRequest. + - `apply` specifies when the OpsRequest should be applied. The default is "IfReady".
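+ +For intuition, here is a minimal sketch of how the `resourceDiffPercentage` threshold plays out; the variable names are illustrative and the operator's exact formula may differ. + +```bash +# Hypothetical resourceDiffPercentage check (threshold set to 20 above). +current=500 # current CPU request in millicores +recommended=600 # VPA-recommended CPU in millicores +diff=$(( (recommended - current) * 100 / current )) +echo "${diff}%" # prints 20% -> meets the threshold, so an ops request is created +```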
+
+Let's create the `KafkaAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/autoscaler/compute/kafka-combined-autoscaler.yaml
+kafkaautoscaler.autoscaling.kubedb.com/kf-combined-autoscaler created
+```
+
+#### Verify Autoscaling is set up successfully
+
+Let's check that the `kafkaautoscaler` resource is created successfully,
+
+```bash
+$ kubectl describe kafkaautoscaler kf-combined-autoscaler -n demo
+Name:         kf-combined-autoscaler
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         KafkaAutoscaler
+Metadata:
+  Creation Timestamp:  2024-08-27T05:55:51Z
+  Generation:          1
+  Owner References:
+    API Version:           kubedb.com/v1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  Kafka
+    Name:                  kafka-dev
+    UID:                   a0153c7f-1e1e-4070-a318-c7c1153b810a
+  Resource Version:        1104655
+  UID:                     817602cc-f851-4fc5-b2c1-1d191462ac56
+Spec:
+  Compute:
+    Node:
+      Container Controlled Values:  RequestsAndLimits
+      Controlled Resources:
+        cpu
+        memory
+      Max Allowed:
+        Cpu:     1
+        Memory:  2Gi
+      Min Allowed:
+        Cpu:                     600m
+        Memory:                  1536Mi
+      Pod Life Time Threshold:   5m0s
+      Resource Diff Percentage:  20
+      Trigger:                   On
+  Database Ref:
+    Name:  kafka-dev
+  Ops Request Options:
+    Apply:    IfReady
+    Timeout:  5m0s
+Status:
+  Checkpoints:
+    Cpu Histogram:
+      Bucket Weights:
+        Index:              0
+        Weight:             4610
+        Index:              1
+        Weight:             10000
+      Reference Timestamp:  2024-08-27T05:55:00Z
+      Total Weight:         0.35081120875606336
+    First Sample Start:     2024-08-27T05:55:44Z
+    Last Sample Start:      2024-08-27T05:56:49Z
+    Last Update Time:       2024-08-27T05:57:10Z
+    Memory Histogram:
+      Reference Timestamp:  2024-08-27T06:00:00Z
+    Ref:
+      Container Name:     kafka
+      Vpa Object Name:    kafka-dev
+    Total Samples Count:  3
+    Version:              v3
+  Conditions:
+    Last Transition Time:  2024-08-27T05:56:32Z
+    Message:               Successfully created kafkaOpsRequest demo/kfops-kafka-dev-z8d3l5
+    Observed Generation:   1
+    Reason:                CreateOpsRequest
+    Status:                True
+    Type:                  CreateOpsRequest
+  Vpas:
+    Conditions:
+      Last Transition Time:  2024-08-27T05:56:10Z
+      Status:                True
+      Type:                  RecommendationProvided
+    Recommendation:
+      Container Recommendations:
+        Container Name:  kafka
+        Lower Bound:
+          Cpu:     600m
+          Memory:  1536Mi
+        Target:
+          Cpu:     600m
+          Memory:  1536Mi
+        Uncapped Target:
+          Cpu:     100m
+          Memory:  511772986
+        Upper Bound:
+          Cpu:     1
+          Memory:  2Gi
+    Vpa Name:  kafka-dev
+Events:       <none>
+```
+
+So, the `kafkaautoscaler` resource is created successfully.
+
+You can see in the `Status.VPAs.Recommendation` section that a recommendation has been generated for our database. Our autoscaler operator continuously watches the generated recommendations and creates a `kafkaopsrequest` based on them if the database pod's resources need to be scaled up or down.
+
+Let's watch the `kafkaopsrequest` in the demo namespace to see if any `kafkaopsrequest` object is created. After some time you'll see that a `kafkaopsrequest` will be created based on the recommendation.
+
+```bash
+$ watch kubectl get kafkaopsrequest -n demo
+Every 2.0s: kubectl get kafkaopsrequest -n demo
+NAME                     TYPE              STATUS        AGE
+kfops-kafka-dev-z8d3l5   VerticalScaling   Progressing   10s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ kubectl get kafkaopsrequest -n demo
+NAME                     TYPE              STATUS       AGE
+kfops-kafka-dev-z8d3l5   VerticalScaling   Successful   3m2s
+```
+
+We can see from the above output that the `KafkaOpsRequest` has succeeded.
If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to scale the cluster. + +```bash +$ kubectl describe kafkaopsrequests -n demo kfops-kafka-dev-z8d3l5 +Name: kfops-kafka-dev-z8d3l5 +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=kafka-dev + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=kafkas.kubedb.com +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-27T05:56:32Z + Generation: 1 + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: KafkaAutoscaler + Name: kf-combined-autoscaler + UID: 817602cc-f851-4fc5-b2c1-1d191462ac56 + Resource Version: 1104871 + UID: 8b7615c6-d38b-4d5a-b733-6aa93cd41a29 +Spec: + Apply: IfReady + Database Ref: + Name: kafka-dev + Timeout: 5m0s + Type: VerticalScaling + Vertical Scaling: + Node: + Resources: + Limits: + Memory: 1536Mi + Requests: + Cpu: 600m + Memory: 1536Mi +Status: + Conditions: + Last Transition Time: 2024-08-27T05:56:32Z + Message: Kafka ops-request has started to vertically scaling the kafka nodes + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2024-08-27T05:56:35Z + Message: Successfully updated PetSets Resources + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-27T05:56:40Z + Message: get pod; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-dev-0 + Last Transition Time: 2024-08-27T05:56:40Z + Message: evict pod; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-dev-0 + Last Transition Time: 2024-08-27T05:57:10Z + Message: check pod running; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-dev-0 + Last Transition Time: 2024-08-27T05:57:15Z + Message: get pod; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-dev-1 + Last Transition Time: 2024-08-27T05:57:16Z + Message: evict pod; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-dev-1 + Last Transition Time: 2024-08-27T05:57:25Z + Message: check pod running; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-dev-1 + Last Transition Time: 2024-08-27T05:57:30Z + Message: Successfully Restarted Pods With Resources + Observed Generation: 1 + Reason: RestartPods + Status: True + Type: RestartPods + Last Transition Time: 2024-08-27T05:57:30Z + Message: Successfully completed the vertical scaling for kafka + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 4m33s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-kafka-dev-z8d3l5 + Normal Starting 4m33s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-dev + Normal Successful 4m33s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-kafka-dev-z8d3l5 + Normal UpdatePetSets 4m30s KubeDB Ops-manager Operator Successfully updated PetSets Resources + Warning get pod; ConditionStatus:True; 
PodName:kafka-dev-0  4m25s  KubeDB Ops-manager Operator  get pod; ConditionStatus:True; PodName:kafka-dev-0
+  Warning  evict pod; ConditionStatus:True; PodName:kafka-dev-0           4m25s  KubeDB Ops-manager Operator  evict pod; ConditionStatus:True; PodName:kafka-dev-0
+  Warning  check pod running; ConditionStatus:False; PodName:kafka-dev-0  4m19s  KubeDB Ops-manager Operator  check pod running; ConditionStatus:False; PodName:kafka-dev-0
+  Warning  check pod running; ConditionStatus:True; PodName:kafka-dev-0   3m55s  KubeDB Ops-manager Operator  check pod running; ConditionStatus:True; PodName:kafka-dev-0
+  Warning  get pod; ConditionStatus:True; PodName:kafka-dev-1             3m50s  KubeDB Ops-manager Operator  get pod; ConditionStatus:True; PodName:kafka-dev-1
+  Warning  evict pod; ConditionStatus:True; PodName:kafka-dev-1           3m49s  KubeDB Ops-manager Operator  evict pod; ConditionStatus:True; PodName:kafka-dev-1
+  Warning  check pod running; ConditionStatus:False; PodName:kafka-dev-1  3m45s  KubeDB Ops-manager Operator  check pod running; ConditionStatus:False; PodName:kafka-dev-1
+  Warning  check pod running; ConditionStatus:True; PodName:kafka-dev-1   3m40s  KubeDB Ops-manager Operator  check pod running; ConditionStatus:True; PodName:kafka-dev-1
+  Normal   RestartPods                                                    3m35s  KubeDB Ops-manager Operator  Successfully Restarted Pods With Resources
+  Normal   Starting                                                       3m35s  KubeDB Ops-manager Operator  Resuming Kafka database: demo/kafka-dev
+  Normal   Successful                                                     3m35s  KubeDB Ops-manager Operator  Successfully resumed Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-kafka-dev-z8d3l5
+```
+
+Now, we are going to verify from the Pod and the Kafka YAML whether the resources of the combined cluster have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo kafka-dev-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "memory": "1536Mi"
+  },
+  "requests": {
+    "cpu": "600m",
+    "memory": "1536Mi"
+  }
+}
+
+
+$ kubectl get kafka -n demo kafka-dev -o json | jq '.spec.podTemplate.spec.containers[].resources'
+{
+  "limits": {
+    "memory": "1536Mi"
+  },
+  "requests": {
+    "cpu": "600m",
+    "memory": "1536Mi"
+  }
+}
+```
+
+The above output verifies that we have successfully autoscaled the resources of the Kafka combined cluster.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete kafkaopsrequest -n demo kfops-kafka-dev-z8d3l5
+kubectl delete kafkaautoscaler -n demo kf-combined-autoscaler
+kubectl delete kf -n demo kafka-dev
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).
+- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md).
+- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/autoscaler/compute/overview.md b/docs/guides/kafka/autoscaler/compute/overview.md
new file mode 100644
index 0000000000..d98826da46
--- /dev/null
+++ b/docs/guides/kafka/autoscaler/compute/overview.md
@@ -0,0 +1,55 @@
+---
+title: Kafka Compute Autoscaling Overview
+menu:
+  docs_{{ .version }}:
+    identifier: kf-auto-scaling-overview
+    name: Overview
+    parent: kf-compute-auto-scaling
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Kafka Compute Resource Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the database compute resources, i.e. CPU and memory, using the `KafkaAutoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Kafka](/docs/guides/kafka/concepts/kafka.md)
+  - [KafkaAutoscaler](/docs/guides/kafka/concepts/kafkaautoscaler.md)
+  - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
+
+## How Compute Autoscaling Works
+
+The following diagram shows how the KubeDB Autoscaler operator autoscales the resources of `Kafka` database components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Compute Auto Scaling process of Kafka" src="/docs/images/day-2-operation/kafka/kf-compute-autoscaling.svg">
+  <figcaption align="center">Fig: Compute Auto Scaling process of Kafka</figcaption>
+</figure>
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, a user creates a `Kafka` Custom Resource Object (CRO).
+
+2. `KubeDB` Provisioner operator watches the `Kafka` CRO.
+
+3. When the operator finds a `Kafka` CRO, it creates the required number of `PetSets` and related necessary resources like secrets, services, etc.
+
+4. Then, in order to set up autoscaling of the various components (i.e. Combined, Broker, Controller) of the `Kafka` cluster, the user creates a `KafkaAutoscaler` CRO with the desired configuration.
+
+5. `KubeDB` Autoscaler operator watches the `KafkaAutoscaler` CRO.
+
+6. `KubeDB` Autoscaler operator generates recommendations using a modified version of the Kubernetes [official recommender](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/pkg/recommender) for the different components of the database, as specified in the `KafkaAutoscaler` CRO.
+
+7. If the generated recommendation doesn't match the current resources of the database, then `KubeDB` Autoscaler operator creates a `KafkaOpsRequest` CRO to scale the database to match the recommendation.
+
+8. `KubeDB` Ops-manager operator watches the `KafkaOpsRequest` CRO.
+
+9. Then the `KubeDB` Ops-manager operator will scale the database component vertically as specified in the `KafkaOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on Autoscaling of various Kafka database components using `KafkaAutoscaler` CRD.
diff --git a/docs/guides/kafka/autoscaler/compute/topology.md b/docs/guides/kafka/autoscaler/compute/topology.md
new file mode 100644
index 0000000000..b2e1d35f4b
--- /dev/null
+++ b/docs/guides/kafka/autoscaler/compute/topology.md
@@ -0,0 +1,852 @@
+---
+title: Kafka Topology Autoscaling
+menu:
+  docs_{{ .version }}:
+    identifier: kf-auto-scaling-topology
+    name: Topology Cluster
+    parent: kf-compute-auto-scaling
+    weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Autoscaling the Compute Resource of a Kafka Topology Cluster
+
+This guide will show you how to use `KubeDB` to autoscale compute resources, i.e. CPU and memory, of a Kafka topology cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Kafka](/docs/guides/kafka/concepts/kafka.md)
+  - [KafkaAutoscaler](/docs/guides/kafka/concepts/kafkaautoscaler.md)
+  - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
+  - [Compute Resource Autoscaling Overview](/docs/guides/kafka/autoscaler/compute/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in the [docs/examples/kafka](/docs/examples/kafka) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Autoscaling of Topology Cluster
+
+Here, we are going to deploy a `Kafka` Topology Cluster using a version supported by the `KubeDB` operator. Then we are going to apply `KafkaAutoscaler` to set up autoscaling. A single `KafkaAutoscaler` can target the broker nodes, the controller nodes, or both; this guide creates a separate object for each in turn, and a combined sketch follows below.
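+
+As a sketch of the both-at-once variant (assumption: `kf-topology-autoscaler` is a hypothetical name, not an object created in this guide; each section takes the same fields explained later in this page):
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: KafkaAutoscaler
+metadata:
+  name: kf-topology-autoscaler   # hypothetical name
+  namespace: demo
+spec:
+  databaseRef:
+    name: kafka-prod             # the topology cluster deployed below
+  compute:
+    broker:
+      trigger: "On"
+      minAllowed:
+        cpu: 600m
+        memory: 1.5Gi
+      maxAllowed:
+        cpu: 1
+        memory: 2Gi
+      controlledResources: ["cpu", "memory"]
+    controller:
+      trigger: "On"
+      minAllowed:
+        cpu: 600m
+        memory: 1.5Gi
+      maxAllowed:
+        cpu: 1
+        memory: 2Gi
+      controlledResources: ["cpu", "memory"]
+```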
+
+#### Deploy Kafka Topology Cluster
+
+In this section, we are going to deploy a Kafka Topology cluster with version `3.6.1`. Then, in the next section, we will set up autoscaling for this database using the `KafkaAutoscaler` CRD. Below is the YAML of the `Kafka` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1
+kind: Kafka
+metadata:
+  name: kafka-prod
+  namespace: demo
+spec:
+  version: 3.6.1
+  topology:
+    broker:
+      replicas: 2
+      podTemplate:
+        spec:
+          containers:
+            - name: kafka
+              resources:
+                limits:
+                  memory: 1Gi
+                requests:
+                  cpu: 500m
+                  memory: 1Gi
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    controller:
+      replicas: 2
+      podTemplate:
+        spec:
+          containers:
+            - name: kafka
+              resources:
+                limits:
+                  memory: 1Gi
+                requests:
+                  cpu: 500m
+                  memory: 1Gi
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+  storageType: Durable
+  deletionPolicy: WipeOut
+```
+
+Let's create the `Kafka` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/autoscaler/kafka-topology.yaml
+kafka.kubedb.com/kafka-prod created
+```
+
+Now, wait until `kafka-prod` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get kf -n demo -w
+NAME         TYPE            VERSION   STATUS         AGE
+kafka-prod   kubedb.com/v1   3.6.1     Provisioning   0s
+kafka-prod   kubedb.com/v1   3.6.1     Provisioning   24s
+.
+.
+kafka-prod   kubedb.com/v1   3.6.1     Ready          118s
+```
+
+## Kafka Topology Autoscaler (Broker)
+
+Let's check the Broker Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo kafka-prod-broker-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+Let's check the Kafka resources for broker,
+```bash
+$ kubectl get kafka -n demo kafka-prod -o json | jq '.spec.topology.broker.podTemplate.spec.containers[].resources'
+{
+  "limits": {
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+You can see from the above outputs that the resources for broker are the same as the ones we assigned while deploying the Kafka.
+
+We are now ready to apply the `KafkaAutoscaler` CRO to set up autoscaling for these broker nodes.
+
+### Compute Resource Autoscaling
+
+Here, we are going to set up compute resource autoscaling using a KafkaAutoscaler Object.
+
+#### Create KafkaAutoscaler Object
+
+In order to set up compute resource autoscaling for this topology cluster, we have to create a `KafkaAutoscaler` CRO with our desired configuration. Below is the YAML of the `KafkaAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: KafkaAutoscaler
+metadata:
+  name: kf-broker-autoscaler
+  namespace: demo
+spec:
+  databaseRef:
+    name: kafka-prod
+  opsRequestOptions:
+    timeout: 5m
+    apply: IfReady
+  compute:
+    broker:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      resourceDiffPercentage: 20
+      minAllowed:
+        cpu: 600m
+        memory: 1.5Gi
+      maxAllowed:
+        cpu: 1
+        memory: 2Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the compute resource scaling operation on the `kafka-prod` cluster.
+- `spec.compute.broker.trigger` specifies that compute autoscaling is enabled for this node.
+- `spec.compute.broker.podLifeTimeThreshold` specifies the minimum lifetime of at least one pod before a vertical scaling can be initiated.
+- `spec.compute.broker.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%.
+  If the difference between the current and recommended resources is less than `resourceDiffPercentage`, the Autoscaler operator will skip the update.
+- `spec.compute.broker.minAllowed` specifies the minimum allowed resources for the cluster.
+- `spec.compute.broker.maxAllowed` specifies the maximum allowed resources for the cluster.
+- `spec.compute.broker.controlledResources` specifies the resources that are controlled by the autoscaler.
+- `spec.compute.broker.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits".
+- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has two fields.
+  - `timeout` specifies the timeout for the OpsRequest.
+  - `apply` specifies when the OpsRequest should be applied. The default is "IfReady".
+
+Let's create the `KafkaAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/autoscaler/compute/kafka-broker-autoscaler.yaml
+kafkaautoscaler.autoscaling.kubedb.com/kf-broker-autoscaler created
+```
+
+#### Verify Autoscaling is set up successfully
+
+Let's check that the `kafkaautoscaler` resource is created successfully,
+
+```bash
+$ kubectl describe kafkaautoscaler kf-broker-autoscaler -n demo
+Name:         kf-broker-autoscaler
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         KafkaAutoscaler
+Metadata:
+  Creation Timestamp:  2024-08-27T06:17:07Z
+  Generation:          1
+  Owner References:
+    API Version:           kubedb.com/v1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  Kafka
+    Name:                  kafka-prod
+    UID:                   7cee41e0-259c-4a5e-856a-e8ca90056120
+  Resource Version:        1113275
+  UID:                     7e3be99f-cd4d-440a-a477-8e8994840ebb
+Spec:
+  Compute:
+    Broker:
+      Container Controlled Values:  RequestsAndLimits
+      Controlled Resources:
+        cpu
+        memory
+      Max Allowed:
+        Cpu:     1
+        Memory:  2Gi
+      Min Allowed:
+        Cpu:                     600m
+        Memory:                  1536Mi
+      Pod Life Time Threshold:   5m0s
+      Resource Diff Percentage:  20
+      Trigger:                   On
+  Database Ref:
+    Name:  kafka-prod
+  Ops Request Options:
+    Apply:    IfReady
+    Timeout:  5m0s
+Status:
+  Checkpoints:
+    Cpu Histogram:
+      Bucket Weights:
+        Index:              1
+        Weight:             10000
+        Index:              2
+        Weight:             2485
+        Index:              30
+        Weight:             1923
+      Reference Timestamp:  2024-08-27T06:20:00Z
+      Total Weight:         0.8587070656303101
+    First Sample Start:     2024-08-27T06:20:45Z
+    Last Sample Start:      2024-08-27T06:23:53Z
+    Last Update Time:       2024-08-27T06:24:10Z
+    Memory Histogram:
+      Bucket Weights:
+        Index:              20
+        Weight:             9682
+        Index:              21
+        Weight:             10000
+      Reference Timestamp:  2024-08-27T06:25:00Z
+      Total Weight:         1.9636285054518687
+    Ref:
+      Container Name:     kafka
+      Vpa Object Name:    kafka-prod-broker
+    Total Samples Count:  6
+    Version:              v3
+  Conditions:
+    Last Transition Time:  2024-08-27T06:21:32Z
+    Message:               Successfully created kafkaOpsRequest demo/kfops-kafka-prod-broker-f6qbth
+    Observed Generation:   1
+    Reason:                CreateOpsRequest
+    Status:                True
+    Type:                  CreateOpsRequest
+  Vpas:
+    Conditions:
+      Last Transition Time:  2024-08-27T06:21:10Z
+      Status:                True
+      Type:                  RecommendationProvided
+    Recommendation:
+      Container Recommendations:
+        Container Name:  kafka
+        Lower Bound:
+          Cpu:     600m
+          Memory:  1536Mi
+        Target:
+          Cpu:     813m
+          Memory:  1536Mi
+        Uncapped Target:
+          Cpu:     813m
+          Memory:  442809964
+        Upper Bound:
+          Cpu:     1
+          Memory:  2Gi
+    Vpa Name:  kafka-prod-broker
+Events:        <none>
+```
+
+So, the `kafkaautoscaler` resource is created successfully.
+
+You can see in the `Status.VPAs.Recommendation` section that a recommendation has been generated for our database. Our autoscaler operator continuously watches the generated recommendations and creates a `kafkaopsrequest` based on them if the database pod's resources need to be scaled up or down.
+
+Let's watch the `kafkaopsrequest` in the demo namespace to see if any `kafkaopsrequest` object is created. After some time you'll see that a `kafkaopsrequest` will be created based on the recommendation.
+
+```bash
+$ watch kubectl get kafkaopsrequest -n demo
+Every 2.0s: kubectl get kafkaopsrequest -n demo
+NAME                             TYPE              STATUS        AGE
+kfops-kafka-prod-broker-f6qbth   VerticalScaling   Progressing   10s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ kubectl get kafkaopsrequest -n demo
+NAME                             TYPE              STATUS       AGE
+kfops-kafka-prod-broker-f6qbth   VerticalScaling   Successful   3m2s
+```
+
+We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest`, we will get an overview of the steps that were followed to scale the cluster.
+
+```bash
+$ kubectl describe kafkaopsrequests -n demo kfops-kafka-prod-broker-f6qbth
+Name:         kfops-kafka-prod-broker-f6qbth
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=kafka-prod
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=kafkas.kubedb.com
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         KafkaOpsRequest
+Metadata:
+  Creation Timestamp:  2024-08-27T06:21:32Z
+  Generation:          1
+  Owner References:
+    API Version:           autoscaling.kubedb.com/v1alpha1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  KafkaAutoscaler
+    Name:                  kf-broker-autoscaler
+    UID:                   7e3be99f-cd4d-440a-a477-8e8994840ebb
+  Resource Version:        1113011
+  UID:                     a040a45b-135c-454a-8ddd-d4bd5000ffba
+Spec:
+  Apply:  IfReady
+  Database Ref:
+    Name:   kafka-prod
+  Timeout:  5m0s
+  Type:     VerticalScaling
+  Vertical Scaling:
+    Broker:
+      Resources:
+        Limits:
+          Memory:  1536Mi
+        Requests:
+          Cpu:     813m
+          Memory:  1536Mi
+Status:
+  Conditions:
+    Last Transition Time:  2024-08-27T06:21:32Z
+    Message:               Kafka ops-request has started to vertically scaling the kafka nodes
+    Observed Generation:   1
+    Reason:                VerticalScaling
+    Status:                True
+    Type:                  VerticalScaling
+    Last Transition Time:  2024-08-27T06:21:35Z
+    Message:               Successfully updated PetSets Resources
+    Observed Generation:   1
+    Reason:                UpdatePetSets
+    Status:                True
+    Type:                  UpdatePetSets
+    Last Transition Time:  2024-08-27T06:21:40Z
+    Message:               get pod; ConditionStatus:True; PodName:kafka-prod-broker-0
+    Observed Generation:   1
+    Status:                True
+    Type:                  GetPod--kafka-prod-broker-0
+    Last Transition Time:  2024-08-27T06:21:41Z
+    Message:               evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0
+    Observed Generation:   1
+    Status:                True
+    Type:                  EvictPod--kafka-prod-broker-0
+    Last Transition Time:  2024-08-27T06:21:55Z
+    Message:               check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0
+    Observed Generation:   1
+    Status:                True
+    Type:                  CheckPodRunning--kafka-prod-broker-0
+    Last Transition Time:  2024-08-27T06:22:00Z
+    Message:               get pod; ConditionStatus:True; PodName:kafka-prod-broker-1
+    Observed Generation:   1
+    Status:                True
+    Type:                  GetPod--kafka-prod-broker-1
+    Last Transition Time:  2024-08-27T06:22:01Z
+    Message:               evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1
+    Observed Generation:   1
+    Status:                True
+    Type:                  EvictPod--kafka-prod-broker-1
+    Last Transition Time:  2024-08-27T06:22:21Z
+    Message:               check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1
+    Observed Generation:   1
+    Status:                True
+    Type:                  CheckPodRunning--kafka-prod-broker-1
+    Last Transition Time:  2024-08-27T06:22:25Z
+    Message:               Successfully Restarted Pods With Resources
+    Observed Generation:   1
+    Reason:                RestartPods
+    Status:                True
+    Type:                  RestartPods
+    Last Transition Time:  2024-08-27T06:22:26Z
+    Message:               Successfully completed the vertical scaling for kafka
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type     Reason                                                                 Age    From                         Message
+  ----     ------                                                                 ----   ----                         -------
+  Normal   Starting                                                               4m55s  KubeDB Ops-manager Operator  Start processing for KafkaOpsRequest: demo/kfops-kafka-prod-broker-f6qbth
+  Normal   Starting                                                               4m55s  KubeDB Ops-manager Operator  Pausing Kafka databse: demo/kafka-prod
+  Normal   Successful                                                             4m55s  KubeDB Ops-manager Operator  Successfully paused Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-kafka-prod-broker-f6qbth
+  Normal   UpdatePetSets                                                          4m52s  KubeDB Ops-manager Operator  Successfully updated PetSets Resources
+  Warning  get pod; ConditionStatus:True; PodName:kafka-prod-broker-0             4m47s  KubeDB Ops-manager Operator  get pod; ConditionStatus:True; PodName:kafka-prod-broker-0
+  Warning  evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0           4m46s  KubeDB Ops-manager Operator  evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0
+  Warning  check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0  4m42s  KubeDB Ops-manager Operator  check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0
+  Warning  check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0   4m32s  KubeDB Ops-manager Operator  check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0
+  Warning  get pod; ConditionStatus:True; PodName:kafka-prod-broker-1             4m27s  KubeDB Ops-manager Operator  get pod; ConditionStatus:True; PodName:kafka-prod-broker-1
+  Warning  evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1           4m26s  KubeDB Ops-manager Operator  evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1
+  Warning  check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1  4m22s  KubeDB Ops-manager Operator  check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1
+  Warning  check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1   4m7s   KubeDB Ops-manager Operator  check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1
+  Normal   RestartPods                                                            4m2s   KubeDB Ops-manager Operator  Successfully Restarted Pods With Resources
+  Normal   Starting                                                               4m1s   KubeDB Ops-manager Operator  Resuming Kafka database: demo/kafka-prod
+  Normal   Successful                                                             4m1s   KubeDB Ops-manager Operator  Successfully resumed Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-kafka-prod-broker-f6qbth
+```
+
+Now, we are going to verify from the Pod and the Kafka YAML whether the resources of the broker node have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo kafka-prod-broker-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "memory": "1536Mi"
+  },
+  "requests": {
+    "cpu": "600m",
+    "memory": "1536Mi"
+  }
+}
+
+
+$ kubectl get kafka -n demo kafka-prod -o json | jq '.spec.topology.broker.podTemplate.spec.containers[].resources'
+{
+  "limits": {
+    "memory": "1536Mi"
+  },
+  "requests": {
+    "cpu": "600m",
+    "memory": "1536Mi"
+  }
+}
+```
+
+## Kafka Topology Autoscaler (Controller)
+
+Let's check the Controller Pod containers resources,
+
+```bash
+$ kubectl get pod -n demo kafka-prod-controller-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+Let's check the Kafka resources for controller,
+```bash
+$ kubectl get kafka -n demo kafka-prod -o json | jq '.spec.topology.controller.podTemplate.spec.containers[].resources'
+{
+  "limits": {
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  }
+}
+```
+
+You can see from the above outputs that the resources for controller are the same as the ones we assigned while deploying the Kafka.
+
+We are now ready to apply the `KafkaAutoscaler` CRO to set up autoscaling for these controller nodes.
+
+### Compute Resource Autoscaling
+
+Here, we are going to set up compute resource autoscaling using a KafkaAutoscaler Object.
+
+#### Create KafkaAutoscaler Object
+
+In order to set up compute resource autoscaling for this topology cluster, we have to create a `KafkaAutoscaler` CRO with our desired configuration. Below is the YAML of the `KafkaAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: KafkaAutoscaler
+metadata:
+  name: kf-controller-autoscaler
+  namespace: demo
+spec:
+  databaseRef:
+    name: kafka-prod
+  opsRequestOptions:
+    timeout: 5m
+    apply: IfReady
+  compute:
+    controller:
+      trigger: "On"
+      podLifeTimeThreshold: 5m
+      resourceDiffPercentage: 20
+      minAllowed:
+        cpu: 600m
+        memory: 1.5Gi
+      maxAllowed:
+        cpu: 1
+        memory: 2Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the compute resource scaling operation on the `kafka-prod` cluster.
+- `spec.compute.controller.trigger` specifies that compute autoscaling is enabled for this node.
+- `spec.compute.controller.podLifeTimeThreshold` specifies the minimum lifetime of at least one pod before a vertical scaling can be initiated.
+- `spec.compute.controller.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%.
+  If the difference between the current and recommended resources is less than `resourceDiffPercentage`, the Autoscaler operator will skip the update.
+- `spec.compute.controller.minAllowed` specifies the minimum allowed resources for the cluster.
+- `spec.compute.controller.maxAllowed` specifies the maximum allowed resources for the cluster.
+- `spec.compute.controller.controlledResources` specifies the resources that are controlled by the autoscaler.
+- `spec.compute.controller.containerControlledValues` specifies which resource values should be controlled. The default is "RequestsAndLimits".
+- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has two fields.
+  - `timeout` specifies the timeout for the OpsRequest.
+  - `apply` specifies when the OpsRequest should be applied. The default is "IfReady".
+
+Let's create the `KafkaAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/autoscaler/compute/kafka-controller-autoscaler.yaml
+kafkaautoscaler.autoscaling.kubedb.com/kf-controller-autoscaler created
+```
+
+#### Verify Autoscaling is set up successfully
+
+Let's check that the `kafkaautoscaler` resource is created successfully,
+
+```bash
+$ kubectl describe kafkaautoscaler kf-controller-autoscaler -n demo
+Name:         kf-controller-autoscaler
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         KafkaAutoscaler
+Metadata:
+  Creation Timestamp:  2024-08-27T06:29:45Z
+  Generation:          1
+  Owner References:
+    API Version:           kubedb.com/v1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  Kafka
+    Name:                  kafka-prod
+    UID:                   7cee41e0-259c-4a5e-856a-e8ca90056120
+  Resource Version:        1116548
+  UID:                     49461872-3628-4bc2-8692-f147bc55aa49
+Spec:
+  Compute:
+    Controller:
+      Container Controlled Values:  RequestsAndLimits
+      Controlled Resources:
+        cpu
+        memory
+      Max Allowed:
+        Cpu:     1
+        Memory:  2Gi
+      Min Allowed:
+        Cpu:                     600m
+        Memory:                  1536Mi
+      Pod Life Time Threshold:   5m0s
+      Resource Diff Percentage:  20
+      Trigger:                   On
+  Database Ref:
+    Name:  kafka-prod
+  Ops Request Options:
+    Apply:    IfReady
+    Timeout:  5m0s
+Status:
+  Checkpoints:
+    Cpu Histogram:
+      Bucket Weights:
+        Index:              1
+        Weight:             10000
+        Index:              3
+        Weight:             4666
+      Reference Timestamp:  2024-08-27T06:30:00Z
+      Total Weight:         0.3085514112801626
+    First Sample Start:     2024-08-27T06:29:52Z
+    Last Sample Start:      2024-08-27T06:30:49Z
+    Last Update Time:       2024-08-27T06:31:11Z
+    Memory Histogram:
+      Reference Timestamp:  2024-08-27T06:35:00Z
+    Ref:
+      Container Name:     kafka
+      Vpa Object Name:    kafka-prod-controller
+    Total Samples Count:  3
+    Version:              v3
+  Conditions:
+    Last Transition Time:  2024-08-27T06:30:32Z
+    Message:               Successfully created kafkaOpsRequest demo/kfops-kafka-prod-controller-3vlvzr
+    Observed Generation:   1
+    Reason:                CreateOpsRequest
+    Status:                True
+    Type:                  CreateOpsRequest
+  Vpas:
+    Conditions:
+      Last Transition Time:  2024-08-27T06:30:11Z
+      Status:                True
+      Type:                  RecommendationProvided
+    Recommendation:
+      Container Recommendations:
+        Container Name:  kafka
+        Lower Bound:
+          Cpu:     600m
+          Memory:  1536Mi
+        Target:
+          Cpu:     600m
+          Memory:  1536Mi
+        Uncapped Target:
+          Cpu:     100m
+          Memory:  297164212
+        Upper Bound:
+          Cpu:     1
+          Memory:  2Gi
+    Vpa Name:  kafka-prod-controller
+Events:       <none>
+```
+
+So, the `kafkaautoscaler` resource is created successfully.
+
+You can see in the `Status.VPAs.Recommendation` section that a recommendation has been generated for our controller nodes. Our autoscaler operator continuously watches the generated recommendations and creates a `kafkaopsrequest` based on them if the controller pod's resources need to be scaled up or down.
+
+Let's watch the `kafkaopsrequest` in the demo namespace to see if any `kafkaopsrequest` object is created. After some time you'll see that a `kafkaopsrequest` will be created based on the recommendation.
+
+```bash
+$ watch kubectl get kafkaopsrequest -n demo
+Every 2.0s: kubectl get kafkaopsrequest -n demo
+NAME                                 TYPE              STATUS        AGE
+kfops-kafka-prod-controller-3vlvzr   VerticalScaling   Progressing   10s
+```
+
+Let's wait for the ops request to become successful.
+ +```bash +$ kubectl get kafkaopsrequest -n demo +NAME TYPE STATUS AGE +kfops-kafka-prod-controller-3vlvzr VerticalScaling Successful 3m2s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to scale the cluster. + +```bash +$ kubectl describe kafkaopsrequests -n demo kfops-kafka-prod-controller-3vlvzr +Name: kfops-kafka-prod-controller-3vlvzr +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=kafka-prod + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=kafkas.kubedb.com +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-27T06:30:32Z + Generation: 1 + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: KafkaAutoscaler + Name: kf-controller-autoscaler + UID: 49461872-3628-4bc2-8692-f147bc55aa49 + Resource Version: 1117285 + UID: 22228813-bf11-4d8a-9bea-53a1995fe4d0 +Spec: + Apply: IfReady + Database Ref: + Name: kafka-prod + Timeout: 5m0s + Type: VerticalScaling + Vertical Scaling: + Controller: + Resources: + Limits: + Memory: 1536Mi + Requests: + Cpu: 600m + Memory: 1536Mi +Status: + Conditions: + Last Transition Time: 2024-08-27T06:30:32Z + Message: Kafka ops-request has started to vertically scaling the kafka nodes + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2024-08-27T06:30:35Z + Message: Successfully updated PetSets Resources + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-27T06:30:40Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-0 + Last Transition Time: 2024-08-27T06:30:40Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-0 + Last Transition Time: 2024-08-27T06:31:11Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-0 + Last Transition Time: 2024-08-27T06:31:15Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-1 + Last Transition Time: 2024-08-27T06:31:16Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-1 + Last Transition Time: 2024-08-27T06:31:30Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-1 + Last Transition Time: 2024-08-27T06:31:35Z + Message: Successfully Restarted Pods With Resources + Observed Generation: 1 + Reason: RestartPods + Status: True + Type: RestartPods + Last Transition Time: 2024-08-27T06:31:36Z + Message: Successfully completed the vertical scaling for kafka + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 2m33s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: 
demo/kfops-kafka-prod-controller-3vlvzr
+  Normal   Starting                                                                   2m33s  KubeDB Ops-manager Operator  Pausing Kafka databse: demo/kafka-prod
+  Normal   Successful                                                                 2m33s  KubeDB Ops-manager Operator  Successfully paused Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-kafka-prod-controller-3vlvzr
+  Normal   UpdatePetSets                                                              2m30s  KubeDB Ops-manager Operator  Successfully updated PetSets Resources
+  Warning  get pod; ConditionStatus:True; PodName:kafka-prod-controller-0             2m25s  KubeDB Ops-manager Operator  get pod; ConditionStatus:True; PodName:kafka-prod-controller-0
+  Warning  evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0           2m25s  KubeDB Ops-manager Operator  evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0
+  Warning  check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0  2m20s  KubeDB Ops-manager Operator  check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0
+  Warning  check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0   115s   KubeDB Ops-manager Operator  check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0
+  Warning  get pod; ConditionStatus:True; PodName:kafka-prod-controller-1             110s   KubeDB Ops-manager Operator  get pod; ConditionStatus:True; PodName:kafka-prod-controller-1
+  Warning  evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1           109s   KubeDB Ops-manager Operator  evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1
+  Warning  check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1  105s   KubeDB Ops-manager Operator  check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1
+  Warning  check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1   95s    KubeDB Ops-manager Operator  check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1
+  Normal   RestartPods                                                                90s    KubeDB Ops-manager Operator  Successfully Restarted Pods With Resources
+  Normal   Starting                                                                   90s    KubeDB Ops-manager Operator  Resuming Kafka database: demo/kafka-prod
+  Normal   Successful                                                                 90s    KubeDB Ops-manager Operator  Successfully resumed Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-kafka-prod-controller-3vlvzr
+```
+
+Now, we are going to verify from the Pod and the Kafka YAML whether the resources of the controller node have been updated to meet the desired state. Let's check,
+
+```bash
+$ kubectl get pod -n demo kafka-prod-controller-0 -o json | jq '.spec.containers[].resources'
+{
+  "limits": {
+    "memory": "1536Mi"
+  },
+  "requests": {
+    "cpu": "600m",
+    "memory": "1536Mi"
+  }
+}
+
+
+$ kubectl get kafka -n demo kafka-prod -o json | jq '.spec.topology.controller.podTemplate.spec.containers[].resources'
+{
+  "limits": {
+    "memory": "1536Mi"
+  },
+  "requests": {
+    "cpu": "600m",
+    "memory": "1536Mi"
+  }
+}
+```
+
+The above output verifies that we have successfully autoscaled the resources of the Kafka topology cluster for broker and controller. You can create a similar `KafkaAutoscaler` object with both broker and controller resources to autoscale the resources of the Kafka topology cluster, as sketched near the beginning of this guide.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete kafkaopsrequest -n demo kfops-kafka-prod-broker-f6qbth kfops-kafka-prod-controller-3vlvzr
+kubectl delete kafkaautoscaler -n demo kf-broker-autoscaler kf-controller-autoscaler
+kubectl delete kf -n demo kafka-prod
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).
+- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md).
+- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/autoscaler/storage/_index.md b/docs/guides/kafka/autoscaler/storage/_index.md
new file mode 100644
index 0000000000..00a2e315fc
--- /dev/null
+++ b/docs/guides/kafka/autoscaler/storage/_index.md
@@ -0,0 +1,10 @@
+---
+title: Storage Autoscaling
+menu:
+  docs_{{ .version }}:
+    identifier: kf-storage-auto-scaling
+    name: Storage Autoscaling
+    parent: kf-auto-scaling
+    weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/kafka/autoscaler/storage/kafka-combined.md b/docs/guides/kafka/autoscaler/storage/kafka-combined.md
new file mode 100644
index 0000000000..d885db2ff5
--- /dev/null
+++ b/docs/guides/kafka/autoscaler/storage/kafka-combined.md
@@ -0,0 +1,469 @@
+---
+title: Kafka Combined Autoscaling
+menu:
+  docs_{{ .version }}:
+    identifier: kf-storage-auto-scaling-combined
+    name: Combined Cluster
+    parent: kf-storage-auto-scaling
+    weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Storage Autoscaling of a Kafka Combined Cluster
+
+This guide will show you how to use `KubeDB` to autoscale the storage of a Kafka Combined cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Kafka](/docs/guides/kafka/concepts/kafka.md)
+  - [KafkaAutoscaler](/docs/guides/kafka/concepts/kafkaautoscaler.md)
+  - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
+  - [Storage Autoscaling Overview](/docs/guides/kafka/autoscaler/storage/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in the [docs/examples/kafka](/docs/examples/kafka) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Storage Autoscaling of Combined Cluster
+
+At first, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
+standard (default)   kubernetes.io/gce-pd   Delete          Immediate           true                   2m49s
+```
+
+We can see from the output that the `standard` storage class has the `ALLOWVOLUMEEXPANSION` field set to true. So, this storage class supports volume expansion. We can use it.
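+
+You can also query the flag on a specific class directly; note that `allowVolumeExpansion` is a top-level field of the `StorageClass` object, not part of a `spec`:
+
+```bash
+# Prints "true" when the class supports in-place PVC expansion
+$ kubectl get storageclass standard -o jsonpath='{.allowVolumeExpansion}'
+true
+```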
+
+Now, we are going to deploy a `Kafka` Combined cluster using a version supported by the `KubeDB` operator. Then we are going to apply `KafkaAutoscaler` to set up autoscaling.
+
+#### Deploy Kafka Combined Cluster
+
+In this section, we are going to deploy a Kafka Combined cluster with version `3.6.1`. Then, in the next section, we will set up autoscaling for this cluster using the `KafkaAutoscaler` CRD. Below is the YAML of the `Kafka` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1
+kind: Kafka
+metadata:
+  name: kafka-dev
+  namespace: demo
+spec:
+  replicas: 2
+  version: 3.6.1
+  podTemplate:
+    spec:
+      containers:
+        - name: kafka
+          resources:
+            limits:
+              memory: 1Gi
+            requests:
+              cpu: 500m
+              memory: 1Gi
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  deletionPolicy: WipeOut
+```
+
+Let's create the `Kafka` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/autoscaler/kafka-combined.yaml
+kafka.kubedb.com/kafka-dev created
+```
+
+Now, wait until `kafka-dev` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get kf -n demo -w
+NAME        TYPE            VERSION   STATUS         AGE
+kafka-dev   kubedb.com/v1   3.6.1     Provisioning   0s
+kafka-dev   kubedb.com/v1   3.6.1     Provisioning   24s
+.
+.
+kafka-dev   kubedb.com/v1   3.6.1     Ready          92s
+```
+
+Let's check the volume size from the petset, and from the persistent volume,
+
+```bash
+$ kubectl get petset -n demo kafka-dev -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS   REASON   AGE
+pvc-129be4b9-f7e8-489e-8bc5-cd420e680f51   1Gi        RWO            Delete           Bound    demo/kafka-dev-data-kafka-dev-0   standard                40s
+pvc-f068d245-718b-4561-b452-f3130bb260f6   1Gi        RWO            Delete           Bound    demo/kafka-dev-data-kafka-dev-1   standard                35s
+```
+
+You can see that the petset has 1Gi storage, and the capacity of each persistent volume is also 1Gi.
+
+We are now ready to apply the `KafkaAutoscaler` CRO to set up storage autoscaling for this cluster.
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a KafkaAutoscaler Object.
+
+#### Create KafkaAutoscaler Object
+
+In order to set up storage autoscaling for this combined cluster, we have to create a `KafkaAutoscaler` CRO with our desired configuration. Below is the YAML of the `KafkaAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: KafkaAutoscaler
+metadata:
+  name: kf-storage-autoscaler-combined
+  namespace: demo
+spec:
+  databaseRef:
+    name: kafka-dev
+  storage:
+    node:
+      expansionMode: "Online"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the storage autoscaling operation on the `kafka-dev` cluster.
+- `spec.storage.node.trigger` specifies that storage autoscaling is enabled for this cluster.
+- `spec.storage.node.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `60%`, storage autoscaling will be triggered.
+- `spec.storage.node.scalingThreshold` specifies the scaling threshold; storage will be scaled by `50%` of the current amount (see the sketch below).
+- It has another field `spec.storage.node.expansionMode` to set the opsRequest volumeExpansionMode, which supports two values: `Online` & `Offline`. The default value is `Online`.
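+
+To make the two thresholds concrete, here is a small illustrative calculation using the kind of numbers we will see from `df` later in this demo (an approximation; the operator computes usage from its own metrics, and the exact rounding may differ):
+
+```bash
+# Illustrative only: 60% usageThreshold decides *when* to expand,
+# 50% scalingThreshold decides roughly *how much* storage to add.
+capacity_mi=974 used_mi=601 usage_threshold=60 scaling_threshold=50
+usage=$(( used_mi * 100 / capacity_mi ))   # -> 61 (%)
+if [ "$usage" -gt "$usage_threshold" ]; then
+  new_size=$(( capacity_mi + capacity_mi * scaling_threshold / 100 ))
+  echo "usage ${usage}% > ${usage_threshold}%: request ~${new_size}Mi of storage"
+fi
+```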
+
+Let's create the `KafkaAutoscaler` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/autoscaler/storage/kafka-storage-autoscaler-combined.yaml
+kafkaautoscaler.autoscaling.kubedb.com/kf-storage-autoscaler-combined created
+```
+
+#### Verify Storage Autoscaling is set up successfully
+
+Let's check that the `kafkaautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get kafkaautoscaler -n demo
+NAME                             AGE
+kf-storage-autoscaler-combined   8s
+
+
+$ kubectl describe kafkaautoscaler -n demo kf-storage-autoscaler-combined
+Name:         kf-storage-autoscaler-combined
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         KafkaAutoscaler
+Metadata:
+  Creation Timestamp:  2024-08-27T06:56:57Z
+  Generation:          1
+  Owner References:
+    API Version:           kubedb.com/v1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  Kafka
+    Name:                  kafka-dev
+    UID:                   a1d1b2f9-ef72-4ef6-8652-f39ee548c744
+  Resource Version:        1123501
+  UID:                     83c7a7b6-aaf2-4776-8337-114bd1800d7c
+Spec:
+  Database Ref:
+    Name:  kafka-dev
+  Ops Request Options:
+    Apply:  IfReady
+  Storage:
+    Node:
+      Expansion Mode:  Online
+      Scaling Rules:
+        Applies Upto:
+        Threshold:        50pc
+      Scaling Threshold:  50
+      Trigger:            On
+      Usage Threshold:    60
+Events:  <none>
+```
+
+So, the `kafkaautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up the persistent volume to exceed the `usageThreshold` using the `dd` command to see if storage autoscaling is working or not.
+
+Let's exec into the cluster pod and fill the cluster volume using the following commands:
+
+```bash
+$ kubectl exec -it -n demo kafka-dev-0 -- bash
+kafka@kafka-dev-0:~$ df -h /var/log/kafka
+Filesystem                                               Size  Used Avail Use% Mounted on
+/dev/standard/pvc-129be4b9-f7e8-489e-8bc5-cd420e680f51   974M  168K  958M   1% /var/log/kafka
+kafka@kafka-dev-0:~$ dd if=/dev/zero of=/var/log/kafka/file.img bs=600M count=1
+1+0 records in
+1+0 records out
+629145600 bytes (629 MB, 600 MiB) copied, 7.44144 s, 84.5 MB/s
+kafka@kafka-dev-0:~$ df -h /var/log/kafka
+Filesystem                                               Size  Used Avail Use% Mounted on
+/dev/standard/pvc-129be4b9-f7e8-489e-8bc5-cd420e680f51   974M  601M  358M  63% /var/log/kafka
+```
+
+So, from the above output we can see that the storage usage is 63%, which exceeds the `usageThreshold` of 60%.
+
+Let's watch the `kafkaopsrequest` in the demo namespace to see if any `kafkaopsrequest` object is created. After some time you'll see that a `kafkaopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`.
+
+```bash
+$ watch kubectl get kafkaopsrequest -n demo
+Every 2.0s: kubectl get kafkaopsrequest -n demo
+NAME                     TYPE              STATUS        AGE
+kfops-kafka-dev-sa4thn   VolumeExpansion   Progressing   10s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ kubectl get kafkaopsrequest -n demo
+NAME                     TYPE              STATUS       AGE
+kfops-kafka-dev-sa4thn   VolumeExpansion   Successful   97s
+```
+
+We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest`, we will get an overview of the steps that were followed to expand the volume of the cluster.
+ +```bash +$ kubectl describe kafkaopsrequests -n demo kfops-kafka-dev-sa4thn +Name: kfops-kafka-dev-sa4thn +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=kafka-dev + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=kafkas.kubedb.com +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-27T08:12:33Z + Generation: 1 + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: KafkaAutoscaler + Name: kf-storage-autoscaler-combined + UID: a0ce73df-0d42-483a-9c47-ca58e57ea614 + Resource Version: 1135462 + UID: 78b52373-75f9-40a1-8528-3d0cd9beb4c5 +Spec: + Apply: IfReady + Database Ref: + Name: kafka-dev + Type: VolumeExpansion + Volume Expansion: + Mode: Online + Node: 1531054080 +Status: + Conditions: + Last Transition Time: 2024-08-27T08:12:33Z + Message: Kafka ops-request has started to expand volume of kafka nodes. + Observed Generation: 1 + Reason: VolumeExpansion + Status: True + Type: VolumeExpansion + Last Transition Time: 2024-08-27T08:12:41Z + Message: get pet set; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPetSet + Last Transition Time: 2024-08-27T08:12:41Z + Message: is petset deleted; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsPetsetDeleted + Last Transition Time: 2024-08-27T08:12:51Z + Message: successfully deleted the petSets with orphan propagation policy + Observed Generation: 1 + Reason: OrphanPetSetPods + Status: True + Type: OrphanPetSetPods + Last Transition Time: 2024-08-27T08:12:56Z + Message: get pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPvc + Last Transition Time: 2024-08-27T08:12:56Z + Message: is pvc patched; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsPvcPatched + Last Transition Time: 2024-08-27T08:18:16Z + Message: compare storage; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CompareStorage + Last Transition Time: 2024-08-27T08:18:21Z + Message: successfully updated combined node PVC sizes + Observed Generation: 1 + Reason: UpdateCombinedNodePVCs + Status: True + Type: UpdateCombinedNodePVCs + Last Transition Time: 2024-08-27T08:18:27Z + Message: successfully reconciled the Kafka resources + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-27T08:18:32Z + Message: PetSet is recreated + Observed Generation: 1 + Reason: ReadyPetSets + Status: True + Type: ReadyPetSets + Last Transition Time: 2024-08-27T08:18:32Z + Message: Successfully completed volumeExpansion for kafka + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 6m19s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-kafka-dev-sa4thn + Normal Starting 6m19s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-dev + Normal Successful 6m19s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-kafka-dev-sa4thn + Warning get pet set; ConditionStatus:True 6m11s KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Warning is petset deleted; ConditionStatus:True 6m11s KubeDB Ops-manager Operator is petset deleted; ConditionStatus:True + Warning get pet set; 
ConditionStatus:True 6m6s KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Normal OrphanPetSetPods 6m1s KubeDB Ops-manager Operator successfully deleted the petSets with orphan propagation policy + Warning get pvc; ConditionStatus:True 5m56s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning is pvc patched; ConditionStatus:True 5m56s KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m51s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 5m51s KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pvc; ConditionStatus:True 5m46s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m41s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m36s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m31s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m26s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m21s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m16s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m11s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m6s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m1s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m56s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m51s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m46s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m41s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m36s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m31s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m26s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m21s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m16s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m11s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m6s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m1s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m56s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m51s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m46s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m41s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m36s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m31s KubeDB Ops-manager Operator get pvc; 
ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m26s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m21s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 3m21s KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m16s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning is pvc patched; ConditionStatus:True 3m16s KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m11s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 3m11s KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pvc; ConditionStatus:True 3m6s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m1s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m56s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m51s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m46s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m41s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m36s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m31s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m26s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m21s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m16s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m11s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m6s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m1s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 116s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 111s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 106s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 101s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 96s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 91s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 86s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 81s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 76s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 71s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 66s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 61s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 56s KubeDB Ops-manager Operator get 
pvc; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True      51s    KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True      46s    KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True      41s    KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True      36s    KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  compare storage; ConditionStatus:True    36s    KubeDB Ops-manager Operator  compare storage; ConditionStatus:True
+  Normal   UpdateCombinedNodePVCs             31s    KubeDB Ops-manager Operator  successfully updated combined node PVC sizes
+  Normal   UpdatePetSets                      25s    KubeDB Ops-manager Operator  successfully reconciled the Kafka resources
+  Warning  get pet set; ConditionStatus:True  20s    KubeDB Ops-manager Operator  get pet set; ConditionStatus:True
+  Normal   ReadyPetSets                       20s    KubeDB Ops-manager Operator  PetSet is recreated
+  Normal   Starting                           20s    KubeDB Ops-manager Operator  Resuming Kafka database: demo/kafka-dev
+  Normal   Successful                         20s    KubeDB Ops-manager Operator  Successfully resumed Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-kafka-dev-sa4thn
+```
+
+Now, we are going to verify from the `PetSet` and the `Persistent Volume` whether the volume of the combined cluster has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get petset -n demo kafka-dev -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1531054080"
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS   REASON   AGE
+pvc-129be4b9-f7e8-489e-8bc5-cd420e680f51   1462Mi     RWO            Delete           Bound    demo/kafka-dev-data-kafka-dev-0   longhorn                30m5s
+pvc-f068d245-718b-4561-b452-f3130bb260f6   1462Mi     RWO            Delete           Bound    demo/kafka-dev-data-kafka-dev-1   longhorn                30m1s
+```
+
+The above output verifies that we have successfully autoscaled the volume of the Kafka combined cluster.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete kafkaopsrequests -n demo kfops-kafka-dev-sa4thn
+kubectl delete kafkautoscaler -n demo kf-storage-autoscaler-combined
+kubectl delete kf -n demo kafka-dev
+```
+
+## Next Steps
+
+- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).
+- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md).
+- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/autoscaler/storage/kafka-topology.md b/docs/guides/kafka/autoscaler/storage/kafka-topology.md
new file mode 100644
index 0000000000..d9f8f5858d
--- /dev/null
+++ b/docs/guides/kafka/autoscaler/storage/kafka-topology.md
@@ -0,0 +1,684 @@
+---
+title: Kafka Topology Autoscaling
+menu:
+  docs_{{ .version }}:
+    identifier: kf-storage-auto-scaling-topology
+    name: Topology Cluster
+    parent: kf-storage-auto-scaling
+    weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Storage Autoscaling of a Kafka Topology Cluster
+
+This guide will show you how to use `KubeDB` to autoscale the storage of a Kafka Topology cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation)
+
+- Install Prometheus from [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- You should be familiar with the following `KubeDB` concepts:
+    - [Kafka](/docs/guides/kafka/concepts/kafka.md)
+    - [KafkaAutoscaler](/docs/guides/kafka/concepts/kafkaautoscaler.md)
+    - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
+    - [Storage Autoscaling Overview](/docs/guides/kafka/autoscaler/storage/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/kafka](/docs/examples/kafka) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Storage Autoscaling of Topology Cluster
+
+At first, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
+standard (default)   kubernetes.io/gce-pd   Delete          Immediate           true                   2m49s
+```
+
+We can see from the output that the `standard` storage class has the `ALLOWVOLUMEEXPANSION` field set to true. So, this storage class supports volume expansion. We can use it.
+
+Now, we are going to deploy a `Kafka` topology cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `KafkaAutoscaler` to set up autoscaling.
+
+#### Deploy Kafka topology
+
+In this section, we are going to deploy a Kafka topology cluster with version `3.6.1`. Then, in the next section we will set up autoscaling for this cluster using `KafkaAutoscaler` CRD. Below is the YAML of the `Kafka` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1
+kind: Kafka
+metadata:
+  name: kafka-prod
+  namespace: demo
+spec:
+  version: 3.6.1
+  topology:
+    broker:
+      replicas: 2
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    controller:
+      replicas: 2
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+  storageType: Durable
+  deletionPolicy: WipeOut
+```
+
+Let's create the `Kafka` CRO we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/autoscaler/kafka-topology.yaml
+kafka.kubedb.com/kafka-prod created
+```
+
+Now, wait until `kafka-prod` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get kf -n demo -w
+NAME         TYPE            VERSION   STATUS         AGE
+kafka-prod   kubedb.com/v1   3.6.1     Provisioning   0s
+kafka-prod   kubedb.com/v1   3.6.1     Provisioning   24s
+.
+.
+kafka-prod   kubedb.com/v1   3.6.1     Ready          119s
+```
+
+Let's check the volume size from the petset, and from the persistent volume,
+
+```bash
+$ kubectl get petset -n demo kafka-prod-broker -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+$ kubectl get petset -n demo kafka-prod-controller -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                          STORAGECLASS   REASON   AGE
+pvc-128d9138-64da-4021-8a7c-7ca80823e842   1Gi        RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-controller-1   longhorn                33s
+pvc-27fe9102-2e7d-41e0-b77d-729a82c64e21   1Gi        RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-broker-0       longhorn                51s
+pvc-3bb98ba1-9cea-46ad-857f-fc843c265d57   1Gi        RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-controller-0   longhorn                50s
+pvc-68f86aac-33d1-423a-bc56-8a905b546db2   1Gi        RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-broker-1       longhorn                32s
+```
+
+You can see that the petsets for both broker and controller have 1Gi of storage, and the capacity of all the persistent volumes is also 1Gi.
+
+We are now ready to apply the `KafkaAutoscaler` CRO to set up storage autoscaling for this cluster (broker and controller).
+
+### Storage Autoscaling
+
+Here, we are going to set up storage autoscaling using a KafkaAutoscaler Object.
+
+#### Create KafkaAutoscaler Object
+
+In order to set up storage autoscaling for this topology cluster, we have to create a `KafkaAutoscaler` CRO with our desired configuration. Below is the YAML of the `KafkaAutoscaler` object that we are going to create,
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: KafkaAutoscaler
+metadata:
+  name: kf-storage-autoscaler-topology
+  namespace: demo
+spec:
+  databaseRef:
+    name: kafka-prod
+  storage:
+    broker:
+      expansionMode: "Online"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 100
+    controller:
+      expansionMode: "Online"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 100
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the volume expansion operation on the `kafka-prod` cluster.
+- `spec.storage.broker.trigger`/`spec.storage.controller.trigger` specifies that storage autoscaling is enabled for the broker and controller of the topology cluster.
+- `spec.storage.broker.usageThreshold`/`spec.storage.controller.usageThreshold` specifies the storage usage threshold; if storage usage exceeds `60%`, storage autoscaling will be triggered.
+- `spec.storage.broker.scalingThreshold`/`spec.storage.controller.scalingThreshold` specifies the scaling threshold. Storage will be scaled by `100%` of the current amount.
+- It has another field `spec.storage.broker.expansionMode`/`spec.storage.controller.expansionMode` to set the opsRequest volumeExpansionMode, which supports two values: `Online` & `Offline`. The default value is `Online`.
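+
+If your storage class cannot expand a volume while it is mounted, you can switch the expansion mode instead. Below is a minimal sketch of the same `storage` section, assuming a CSI driver that only supports offline expansion (only the `expansionMode` values differ from the object above):
+
+```yaml
+  storage:
+    broker:
+      expansionMode: "Offline"   # offline mode typically restarts pods so volumes can be expanded while unmounted
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 100
+    controller:
+      expansionMode: "Offline"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 100
+```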
+
+Let's create the `KafkaAutoscaler` CR with `Online` mode that we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/autoscaling/storage/kafka-storage-autoscaler-topology.yaml
+kafkaautoscaler.autoscaling.kubedb.com/kf-storage-autoscaler-topology created
+```
+
+#### Storage Autoscaling is set up successfully
+
+Let's check that the `kafkaautoscaler` resource is created successfully,
+
+```bash
+$ kubectl get kafkaautoscaler -n demo
+NAME                             AGE
+kf-storage-autoscaler-topology   8s
+
+$ kubectl describe kafkaautoscaler -n demo kf-storage-autoscaler-topology
+Name:         kf-storage-autoscaler-topology
+Namespace:    demo
+Labels:       
+Annotations:  
+API Version:  autoscaling.kubedb.com/v1alpha1
+Kind:         KafkaAutoscaler
+Metadata:
+  Creation Timestamp:  2024-08-27T08:54:35Z
+  Generation:          1
+  Owner References:
+    API Version:           kubedb.com/v1
+    Block Owner Deletion:  true
+    Controller:            true
+    Kind:                  Kafka
+    Name:                  kafka-prod
+    UID:                   1ae37155-dd92-4547-8aba-589140d1d2cf
+  Resource Version:        1142604
+  UID:                     bca444d0-d860-4588-9b51-412c614c4771
+Spec:
+  Database Ref:
+    Name:  kafka-prod
+  Ops Request Options:
+    Apply:  IfReady
+  Storage:
+    Broker:
+      Expansion Mode:  Online
+      Scaling Rules:
+        Applies Upto:     
+        Threshold:        100pc
+      Scaling Threshold:  100
+      Trigger:            On
+      Usage Threshold:    60
+    Controller:
+      Expansion Mode:  Online
+      Scaling Rules:
+        Applies Upto:     
+        Threshold:        100pc
+      Scaling Threshold:  100
+      Trigger:            On
+      Usage Threshold:    60
+Events:                   
+```
+So, the `kafkaautoscaler` resource is created successfully.
+
+Now, for this demo, we are going to manually fill up the persistent volumes to exceed the `usageThreshold` using the `dd` command to see whether storage autoscaling works.
+
+We are autoscaling the volumes of both broker and controller, so we need to fill up the persistent volumes of both.
+
+1. Let's exec into the broker pod and fill up the volume using the following commands:
+
+```bash
+$ kubectl exec -it -n demo kafka-prod-broker-0 -- bash
+kafka@kafka-prod-broker-0:~$ df -h /var/log/kafka
+Filesystem                                              Size  Used Avail Use% Mounted on
+/dev/standard/pvc-27fe9102-2e7d-41e0-b77d-729a82c64e21  974M  256K  958M   1% /var/log/kafka
+kafka@kafka-prod-broker-0:~$ dd if=/dev/zero of=/var/log/kafka/file.img bs=600M count=1
+1+0 records in
+1+0 records out
+629145600 bytes (629 MB, 600 MiB) copied, 5.58851 s, 113 MB/s
+kafka@kafka-prod-broker-0:~$ df -h /var/log/kafka
+Filesystem                                              Size  Used Avail Use% Mounted on
+/dev/standard/pvc-27fe9102-2e7d-41e0-b77d-729a82c64e21  974M  601M  358M  63% /var/log/kafka
+```
+
+2. Let's exec into the controller pod and fill up the volume using the following commands:
+
+```bash
+$ kubectl exec -it -n demo kafka-prod-controller-0 -- bash
+kafka@kafka-prod-controller-0:~$ df -h /var/log/kafka
+Filesystem                                              Size  Used Avail Use% Mounted on
+/dev/standard/pvc-3bb98ba1-9cea-46ad-857f-fc843c265d57  974M  192K  958M   1% /var/log/kafka
+kafka@kafka-prod-controller-0:~$ dd if=/dev/zero of=/var/log/kafka/file.img bs=600M count=1
+1+0 records in
+1+0 records out
+629145600 bytes (629 MB, 600 MiB) copied, 3.39618 s, 185 MB/s
+kafka@kafka-prod-controller-0:~$ df -h /var/log/kafka
+Filesystem                                              Size  Used Avail Use% Mounted on
+/dev/standard/pvc-3bb98ba1-9cea-46ad-857f-fc843c265d57  974M  601M  358M  63% /var/log/kafka
+```
+
+So, from the above output we can see that the storage usage is 63% for both nodes, which exceeds the `usageThreshold` of 60%.
+
+Two `KafkaOpsRequest` objects will be created, one for the broker and one for the controller, to expand the volumes of both node types.
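+
+You can also list every ops request that belongs to this cluster by label, as shown below. This step is optional; the label key is taken from the `kubectl describe` output of the generated `KafkaOpsRequest` shown later in this guide:
+
+```bash
+# Optional: list all KafkaOpsRequests generated for this Kafka cluster.
+$ kubectl get kafkaopsrequest -n demo -l app.kubernetes.io/instance=kafka-prod
+```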
+Let's watch the `kafkaopsrequest` in the demo namespace to see if any `kafkaopsrequest` object is created. After some time you'll see that a `kafkaopsrequest` of type `VolumeExpansion` will be created based on the `scalingThreshold`. + +```bash +$ watch kubectl get kafkaopsrequest -n demo +Every 2.0s: kubectl get kafkaopsrequest -n demo +NAME TYPE STATUS AGE +kfops-kafka-prod-7qwpbn VolumeExpansion Progressing 10s +``` + +Let's wait for the ops request to become successful. + +```bash +$ kubectl get kafkaopsrequest -n demo +NAME TYPE STATUS AGE +kfops-kafka-prod-7qwpbn VolumeExpansion Successful 2m37s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to expand the volume of the cluster. + +```bash +$ kubectl describe kafkaopsrequests -n demo kfops-kafka-prod-7qwpbn +Name: kfops-kafka-prod-7qwpbn +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=kafka-prod + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=kafkas.kubedb.com +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-27T08:59:43Z + Generation: 1 + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: KafkaAutoscaler + Name: kf-storage-autoscaler-topology + UID: bca444d0-d860-4588-9b51-412c614c4771 + Resource Version: 1144249 + UID: 2a9bd422-c6ce-47c9-bfd6-ba7f79774c17 +Spec: + Apply: IfReady + Database Ref: + Name: kafka-prod + Type: VolumeExpansion + Volume Expansion: + Broker: 2041405440 + Mode: Online +Status: + Conditions: + Last Transition Time: 2024-08-27T08:59:43Z + Message: Kafka ops-request has started to expand volume of kafka nodes. 
+ Observed Generation: 1 + Reason: VolumeExpansion + Status: True + Type: VolumeExpansion + Last Transition Time: 2024-08-27T08:59:51Z + Message: get pet set; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPetSet + Last Transition Time: 2024-08-27T08:59:51Z + Message: is petset deleted; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsPetsetDeleted + Last Transition Time: 2024-08-27T09:00:01Z + Message: successfully deleted the petSets with orphan propagation policy + Observed Generation: 1 + Reason: OrphanPetSetPods + Status: True + Type: OrphanPetSetPods + Last Transition Time: 2024-08-27T09:00:06Z + Message: get pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPvc + Last Transition Time: 2024-08-27T09:00:06Z + Message: is pvc patched; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsPvcPatched + Last Transition Time: 2024-08-27T09:03:51Z + Message: compare storage; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CompareStorage + Last Transition Time: 2024-08-27T09:03:56Z + Message: successfully updated broker node PVC sizes + Observed Generation: 1 + Reason: UpdateBrokerNodePVCs + Status: True + Type: UpdateBrokerNodePVCs + Last Transition Time: 2024-08-27T09:04:03Z + Message: successfully reconciled the Kafka resources + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-27T09:04:08Z + Message: PetSet is recreated + Observed Generation: 1 + Reason: ReadyPetSets + Status: True + Type: ReadyPetSets + Last Transition Time: 2024-08-27T09:04:08Z + Message: Successfully completed volumeExpansion for kafka + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 6m6s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-kafka-prod-7qwpbn + Normal Starting 6m6s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-prod + Normal Successful 6m6s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-kafka-prod-7qwpbn + Warning get pet set; ConditionStatus:True 5m58s KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Warning is petset deleted; ConditionStatus:True 5m58s KubeDB Ops-manager Operator is petset deleted; ConditionStatus:True + Warning get pet set; ConditionStatus:True 5m53s KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Normal OrphanPetSetPods 5m48s KubeDB Ops-manager Operator successfully deleted the petSets with orphan propagation policy + Warning get pvc; ConditionStatus:True 5m43s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning is pvc patched; ConditionStatus:True 5m43s KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m38s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 5m38s KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pvc; ConditionStatus:True 5m33s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m28s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m23s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m18s KubeDB 
Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m13s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m8s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m3s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m58s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m53s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m48s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m43s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m38s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m33s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m28s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m23s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m18s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m13s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m8s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m3s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m58s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 3m58s KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m53s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning is pvc patched; ConditionStatus:True 3m53s KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m48s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 3m48s KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pvc; ConditionStatus:True 3m43s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m38s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m33s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m28s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m23s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m18s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m13s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m8s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m3s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m58s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m53s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m48s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; 
ConditionStatus:True            2m43s  KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True      2m38s  KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True      2m33s  KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True      2m28s  KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True      2m23s  KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True      2m18s  KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True      2m13s  KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True      2m8s   KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True      2m3s   KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True      118s   KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  compare storage; ConditionStatus:True    118s   KubeDB Ops-manager Operator  compare storage; ConditionStatus:True
+  Normal   UpdateBrokerNodePVCs               113s   KubeDB Ops-manager Operator  successfully updated broker node PVC sizes
+  Normal   UpdatePetSets                      106s   KubeDB Ops-manager Operator  successfully reconciled the Kafka resources
+  Warning  get pet set; ConditionStatus:True  101s   KubeDB Ops-manager Operator  get pet set; ConditionStatus:True
+  Normal   ReadyPetSets                       101s   KubeDB Ops-manager Operator  PetSet is recreated
+  Normal   Starting                           101s   KubeDB Ops-manager Operator  Resuming Kafka database: demo/kafka-prod
+  Normal   Successful                         101s   KubeDB Ops-manager Operator  Successfully resumed Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-kafka-prod-7qwpbn
+```
+
+After a few minutes, another `KafkaOpsRequest` of type `VolumeExpansion` will be created for the controller node.
+
+```bash
+$ kubectl get kafkaopsrequest -n demo
+NAME                      TYPE              STATUS        AGE
+kfops-kafka-prod-7qwpbn   VolumeExpansion   Successful    2m47s
+kfops-kafka-prod-2ta9m6   VolumeExpansion   Progressing   10s
+```
+
+Let's wait for the ops request to become successful.
+
+```bash
+$ kubectl get kafkaopsrequest -n demo
+NAME                      TYPE              STATUS       AGE
+kfops-kafka-prod-7qwpbn   VolumeExpansion   Successful   4m47s
+kfops-kafka-prod-2ta9m6   VolumeExpansion   Successful   2m10s
+```
+
+We can see from the above output that the `KafkaOpsRequest` `kfops-kafka-prod-2ta9m6` has also succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to expand the volume of the cluster.
+ +```bash +$ kubectl describe kafkaopsrequests -n demo kfops-kafka-prod-2ta9m6 +Name: kfops-kafka-prod-2ta9m6 +Namespace: demo +Labels: app.kubernetes.io/component=database + app.kubernetes.io/instance=kafka-prod + app.kubernetes.io/managed-by=kubedb.com + app.kubernetes.io/name=kafkas.kubedb.com +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-27T09:04:43Z + Generation: 1 + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: KafkaAutoscaler + Name: kf-storage-autoscaler-topology + UID: bca444d0-d860-4588-9b51-412c614c4771 + Resource Version: 1145309 + UID: c965e481-8dbd-4b1d-8a9a-40239753cbf0 +Spec: + Apply: IfReady + Database Ref: + Name: kafka-prod + Type: VolumeExpansion + Volume Expansion: + Controller: 2041405440 + Mode: Online +Status: + Conditions: + Last Transition Time: 2024-08-27T09:04:43Z + Message: Kafka ops-request has started to expand volume of kafka nodes. + Observed Generation: 1 + Reason: VolumeExpansion + Status: True + Type: VolumeExpansion + Last Transition Time: 2024-08-27T09:04:51Z + Message: get pet set; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPetSet + Last Transition Time: 2024-08-27T09:04:51Z + Message: is petset deleted; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsPetsetDeleted + Last Transition Time: 2024-08-27T09:05:01Z + Message: successfully deleted the petSets with orphan propagation policy + Observed Generation: 1 + Reason: OrphanPetSetPods + Status: True + Type: OrphanPetSetPods + Last Transition Time: 2024-08-27T09:05:06Z + Message: get pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPvc + Last Transition Time: 2024-08-27T09:05:06Z + Message: is pvc patched; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsPvcPatched + Last Transition Time: 2024-08-27T09:09:36Z + Message: compare storage; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CompareStorage + Last Transition Time: 2024-08-27T09:09:41Z + Message: successfully updated controller node PVC sizes + Observed Generation: 1 + Reason: UpdateControllerNodePVCs + Status: True + Type: UpdateControllerNodePVCs + Last Transition Time: 2024-08-27T09:09:47Z + Message: successfully reconciled the Kafka resources + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-27T09:09:53Z + Message: PetSet is recreated + Observed Generation: 1 + Reason: ReadyPetSets + Status: True + Type: ReadyPetSets + Last Transition Time: 2024-08-27T09:09:53Z + Message: Successfully completed volumeExpansion for kafka + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 8m17s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-kafka-prod-2ta9m6 + Normal Starting 8m17s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-prod + Normal Successful 8m17s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-kafka-prod-2ta9m6 + Warning get pet set; ConditionStatus:True 8m9s KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Warning is petset deleted; ConditionStatus:True 8m9s KubeDB Ops-manager Operator is petset deleted; ConditionStatus:True + Warning get 
pet set; ConditionStatus:True 8m4s KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Normal OrphanPetSetPods 7m59s KubeDB Ops-manager Operator successfully deleted the petSets with orphan propagation policy + Warning get pvc; ConditionStatus:True 7m54s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning is pvc patched; ConditionStatus:True 7m54s KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True + Warning get pvc; ConditionStatus:True 7m49s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 7m49s KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pvc; ConditionStatus:True 7m44s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 7m39s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 7m34s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 7m29s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 7m24s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 7m19s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 7m14s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 7m9s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 7m4s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 6m59s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 6m54s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 6m49s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 6m44s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 6m39s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 6m34s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 6m29s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 6m24s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 6m19s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 6m14s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 6m9s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 6m4s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 6m4s KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m59s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning is pvc patched; ConditionStatus:True 5m59s KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m54s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 5m54s KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pvc; ConditionStatus:True 5m49s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; 
ConditionStatus:True 5m44s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m39s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m34s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m29s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m24s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m19s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m14s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m9s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 5m4s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m59s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m54s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m49s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m44s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m39s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m34s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m29s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m24s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m19s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m14s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m9s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 4m4s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m59s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m54s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m49s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m44s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m39s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m34s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m29s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 3m24s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 3m24s KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Normal UpdateControllerNodePVCs 3m19s KubeDB Ops-manager Operator successfully updated controller node PVC sizes + Normal UpdatePetSets 3m12s KubeDB Ops-manager Operator successfully reconciled the Kafka resources + Warning get pet set; ConditionStatus:True 3m7s KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Normal ReadyPetSets 3m7s KubeDB Ops-manager Operator PetSet is recreated + Normal Starting 3m7s KubeDB 
Ops-manager Operator  Resuming Kafka database: demo/kafka-prod
+  Normal   Successful                         3m7s   KubeDB Ops-manager Operator  Successfully resumed Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-kafka-prod-2ta9m6
+```
+
+Now, we are going to verify from the `PetSet` and the `Persistent Volume` whether the volume of the topology cluster has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get petset -n demo kafka-prod-broker -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2041405440"
+$ kubectl get petset -n demo kafka-prod-controller -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2041405440"
+$ kubectl get pv -n demo
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                          STORAGECLASS   REASON   AGE
+pvc-128d9138-64da-4021-8a7c-7ca80823e842   1948Mi     RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-controller-1   longhorn                33s
+pvc-27fe9102-2e7d-41e0-b77d-729a82c64e21   1948Mi     RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-broker-0       longhorn                51s
+pvc-3bb98ba1-9cea-46ad-857f-fc843c265d57   1948Mi     RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-controller-0   longhorn                50s
+pvc-68f86aac-33d1-423a-bc56-8a905b546db2   1948Mi     RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-broker-1       longhorn                32s
+```
+
+The above output verifies that we have successfully autoscaled the volume of the Kafka topology cluster for both broker and controller.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete kafkaopsrequests -n demo kfops-kafka-prod-7qwpbn kfops-kafka-prod-2ta9m6
+kubectl delete kafkautoscaler -n demo kf-storage-autoscaler-topology
+kubectl delete kf -n demo kafka-prod
+```
+
+## Next Steps
+
+- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).
+- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md).
+- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/autoscaler/storage/overview.md b/docs/guides/kafka/autoscaler/storage/overview.md
new file mode 100644
index 0000000000..b1bf56d051
--- /dev/null
+++ b/docs/guides/kafka/autoscaler/storage/overview.md
@@ -0,0 +1,57 @@
+---
+title: Kafka Storage Autoscaling Overview
+menu:
+  docs_{{ .version }}:
+    identifier: kf-storage-auto-scaling-overview
+    name: Overview
+    parent: kf-storage-auto-scaling
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Kafka Storage Autoscaling
+
+This guide will give an overview of how the KubeDB Autoscaler operator autoscales the database storage using the `kafkaautoscaler` CRD.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+    - [Kafka](/docs/guides/kafka/concepts/kafka.md)
+    - [KafkaAutoscaler](/docs/guides/kafka/concepts/kafkaautoscaler.md)
+    - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
+
+## How Storage Autoscaling Works
+
+The following diagram shows how the KubeDB Autoscaler operator autoscales the resources of `Kafka` cluster components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Storage Auto Scaling process of Kafka" src="/docs/images/day-2-operation/kafka/kf-storage-autoscaling.svg">
+<figcaption align="center">Fig: Storage Auto Scaling process of Kafka</figcaption>
+</figure>
+
+
+The Auto Scaling process consists of the following steps:
+
+1. At first, a user creates a `Kafka` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `Kafka` CR.
+
+3. When the operator finds a `Kafka` CR, it creates the required number of `PetSets` and related necessary stuff like secrets, services, etc.
+
+- Each PetSet creates a Persistent Volume according to the Volume Claim Template provided in the petset configuration.
+
+4. Then, in order to set up storage autoscaling of the various components (i.e. Combined, Broker, Controller) of the `Kafka` cluster, the user creates a `KafkaAutoscaler` CRO with the desired configuration.
+
+5. `KubeDB` Autoscaler operator watches the `KafkaAutoscaler` CRO.
+
+6. `KubeDB` Autoscaler operator continuously watches the persistent volumes of the cluster to check if usage exceeds the specified usage threshold.
+- If the usage exceeds the specified usage threshold, then `KubeDB` Autoscaler operator creates a `KafkaOpsRequest` to expand the storage of the database.
+
+7. `KubeDB` Ops-manager operator watches the `KafkaOpsRequest` CRO.
+
+8. Then the `KubeDB` Ops-manager operator will expand the storage of the cluster component as specified in the `KafkaOpsRequest` CRO.
+
+In the next docs, we are going to show a step-by-step guide on autoscaling storage of various Kafka cluster components using the `KafkaAutoscaler` CRD.
diff --git a/docs/guides/kafka/cli/cli.md b/docs/guides/kafka/cli/cli.md
index f1d2831027..acfd2339e5 100644
--- a/docs/guides/kafka/cli/cli.md
+++ b/docs/guides/kafka/cli/cli.md
@@ -23,21 +23,21 @@ KubeDB comes with its own cli. It is called `kubedb` cli. `kubedb` can be used t
 
 `kubectl create` creates a database CRD object in `default` namespace by default. Following command will create a Kafka object as specified in `kafka.yaml`.
 
 ```bash
-$ kubectl create -f druid-quickstart.yaml
+$ kubectl create -f kafka.yaml
 kafka.kubedb.com/kafka created
 ```
 
 You can provide namespace as a flag `--namespace`. Provided namespace should match with namespace specified in input file.
 
 ```bash
-$ kubectl create -f druid-quickstart.yaml --namespace=kube-system
+$ kubectl create -f kafka.yaml --namespace=kube-system
 kafka.kubedb.com/kafka created
 ```
 
 `kubectl create` command also considers `stdin` as input.
 
 ```bash
-cat druid-quickstart.yaml | kubectl create -f -
+cat kafka.yaml | kubectl create -f -
 ```
 
 ### How to List Objects
@@ -692,14 +692,14 @@ kafka.kubedb.com "kafka" deleted
 You can also use YAML files to delete objects. The following command will delete an Kafka using the type and name specified in `kafka.yaml`.
 
 ```bash
-$ kubectl delete -f druid-quickstart.yaml
+$ kubectl delete -f kafka.yaml
 kafka.kubedb.com "kafka" deleted
 ```
 
 `kubectl delete` command also takes input from `stdin`.
 
 ```bash
-cat druid-quickstart.yaml | kubectl delete -f -
+cat kafka.yaml | kubectl delete -f -
 ```
 
 To delete database with matching labels, use `--selector` flag. The following command will delete kafka with label `app.kubernetes.io/instance=kafka`.
diff --git a/docs/guides/kafka/clustering/topology-cluster/index.md b/docs/guides/kafka/clustering/topology-cluster/index.md
index b40de66213..93e7d72e98 100644
--- a/docs/guides/kafka/clustering/topology-cluster/index.md
+++ b/docs/guides/kafka/clustering/topology-cluster/index.md
@@ -141,22 +141,21 @@ Hence, the cluster is ready to use.
Let's check the k8s resources created by the operator on the deployment of Kafka CRO: ```bash -$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=kafka-prod' +$ kubectl get all,petset,secret,pvc -n demo -l 'app.kubernetes.io/instance=kafka-prod' NAME READY STATUS RESTARTS AGE pod/kafka-prod-broker-0 1/1 Running 0 4m10s pod/kafka-prod-broker-1 1/1 Running 0 4m4s pod/kafka-prod-broker-2 1/1 Running 0 3m57s pod/kafka-prod-controller-0 1/1 Running 0 4m8s -pod/kafka-prod-controller-1 1/1 Running 2 (3m35s ago) 4m +pod/kafka-prod-controller-1 1/1 Running 0 4m pod/kafka-prod-controller-2 1/1 Running 0 3m53s -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -service/kafka-prod-broker ClusterIP None 9092/TCP,29092/TCP 4m14s -service/kafka-prod-controller ClusterIP None 9093/TCP 4m14s +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/kafka-prod-pods ClusterIP None 9092/TCP,9093/TCP,29092/TCP 4m14s NAME READY AGE -petset.apps/kafka-prod-broker 3/3 4m10s -petset.apps/kafka-prod-controller 3/3 4m8s +petset.apps.k8s.appscode.com/kafka-prod-broker 3/3 4m10s +petset.apps.k8s.appscode.com/kafka-prod-controller 3/3 4m8s NAME TYPE VERSION AGE appbinding.appcatalog.appscode.com/kafka-prod kubedb.com/kafka 3.6.1 4m8s @@ -202,25 +201,28 @@ ssl.truststore.password=*********** Now, we have to use a bootstrap server to perform operations in a kafka broker. For this demo, we are going to use the http endpoint of the headless service `kafka-prod-broker` as bootstrap server for publishing & consuming messages to kafka brokers. These endpoints are pointing to all the kafka broker pods. We will set an environment variable for the `clientauth.properties` filepath as well. At first, describe the service to get the http endpoints. ```bash -$ kubectl describe svc -n demo kafka-prod-broker -Name: kafka-prod-broker +$ kubectl describe svc -n demo kafka-prod-pods +Name: kafka-prod-pods Namespace: demo Labels: app.kubernetes.io/component=database app.kubernetes.io/instance=kafka-prod app.kubernetes.io/managed-by=kubedb.com app.kubernetes.io/name=kafkas.kubedb.com Annotations: -Selector: app.kubernetes.io/instance=kafka-prod,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,kubedb.com/role=broker +Selector: app.kubernetes.io/instance=kafka-prod,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: None IPs: None -Port: http 9092/TCP -TargetPort: http/TCP +Port: broker 9092/TCP +TargetPort: broker/TCP Endpoints: 10.244.0.33:9092,10.244.0.37:9092,10.244.0.41:9092 -Port: internal 29092/TCP -TargetPort: internal/TCP +Port: controller 9093/TCP +TargetPort: controller/TCP +Endpoints: 10.244.0.16:9093,10.244.0.20:9093,10.244.0.24:9093 +Port: local 29092/TCP +TargetPort: local/TCP Endpoints: 10.244.0.33:29092,10.244.0.37:29092,10.244.0.41:29092 Session Affinity: None Events: @@ -229,7 +231,7 @@ Events: Use the `http endpoints` and `clientauth.properties` file to set environment variables. These environment variables will be useful for handling console command operations easily. 
 ```bash
-root@kafka-prod-broker-0:~# export SERVER="10.244.0.100:9092,10.244.0.104:9092,10.244.0.108:9092"
+root@kafka-prod-broker-0:~# export SERVER="10.244.0.33:9092,10.244.0.37:9092,10.244.0.41:9092"
 root@kafka-prod-broker-0:~# export CLIENTAUTHCONFIG="$HOME/config/clientauth.properties"
 ```
 
@@ -243,17 +245,17 @@ LeaderEpoch: 15
 HighWatermark: 1820
 MaxFollowerLag: 0
 MaxFollowerLagTimeMs: 159
-CurrentVoters: [0,1,2]
-CurrentObservers: [3,4,5]
+CurrentVoters: [1000,1001,1002]
+CurrentObservers: [0,1,2]
 ```
 
 It will show you important metadata information like clusterID, current leader ID, broker IDs which are participating in leader election voting and IDs of those brokers who are observers. It is important to mention that each broker is assigned a numeric ID which is called its broker ID. The ID is assigned sequentially with respect to the host pod name. In this case, The pods assigned broker IDs are as follows:
 
 | Pods                | Broker ID |
 |---------------------|:---------:|
-| kafka-prod-broker-0 |     3     |
-| kafka-prod-broker-1 |     4     |
-| kafka-prod-broker-2 |     5     |
+| kafka-prod-broker-0 |     0     |
+| kafka-prod-broker-1 |     1     |
+| kafka-prod-broker-2 |     2     |
 
 Let's create a topic named `sample` with 1 partitions and a replication factor of 1. Describe the topic once it's created. You will see the leader ID for each partition and their replica IDs along with in-sync-replicas(ISR).
 
@@ -264,12 +266,12 @@
+Created topic sample.
 root@kafka-prod-broker-0:~# kafka-topics.sh --command-config $CLIENTAUTHCONFIG --describe --topic sample --bootstrap-server localhost:9092
 Topic: sample   TopicId: mqlupmBhQj6OQxxG9m51CA   PartitionCount: 1   ReplicationFactor: 1   Configs: segment.bytes=1073741824
-    Topic: sample   Partition: 0   Leader: 4   Replicas: 4   Isr: 4
+    Topic: sample   Partition: 0   Leader: 1   Replicas: 1   Isr: 1
 ```
 
 Now, we are going to start a producer and a consumer for topic `sample` using console. Let's use this current terminal for producing messages and open a new terminal for consuming messages. Let's set the environment variables for bootstrap server and the configuration file in consumer terminal also.
 
-From the topic description we can see that the leader partition for partition 0 is 4 that is `kafka-prod-broker-1`. If we produce messages to `kafka-prod-broker-1` broker(brokerID=4) it will store those messages in partition 0. Let's produce messages in the producer terminal and consume them from the consumer terminal.
+From the topic description we can see that the leader partition for partition 0 is 1, which is `kafka-prod-broker-1`. If we produce messages to the `kafka-prod-broker-1` broker (brokerID=1), it will store those messages in partition 0. Let's produce messages in the producer terminal and consume them from the consumer terminal.
 
 ```bash
 root@kafka-prod-broker-1:~# kafka-console-producer.sh --producer.config $CLIENTAUTHCONFIG --topic sample --request-required-acks all --bootstrap-server localhost:9092
@@ -290,7 +292,6 @@ I hope it's received by console consumer
 Notice that, messages are coming to the consumer as you continue sending messages via producer. So, we have created a kafka topic and used kafka console producer and consumer to test message publishing and consuming successfully.
-
 ## Cleaning Up
 
 TO clean up the k8s resources created by this tutorial, run:
diff --git a/docs/guides/kafka/concepts/appbinding.md b/docs/guides/kafka/concepts/appbinding.md
index 11618203e6..4cfcc77d09 100644
--- a/docs/guides/kafka/concepts/appbinding.md
+++ b/docs/guides/kafka/concepts/appbinding.md
@@ -5,7 +5,7 @@ menu:
     identifier: kf-appbinding-concepts
     name: AppBinding
     parent: kf-concepts-kafka
-    weight: 35
+    weight: 60
 menu_name: docs_{{ .version }}
 section_menu_id: guides
 ---
diff --git a/docs/guides/kafka/concepts/connectcluster.md b/docs/guides/kafka/concepts/connectcluster.md
index 9cc61000d0..855f60e50a 100644
--- a/docs/guides/kafka/concepts/connectcluster.md
+++ b/docs/guides/kafka/concepts/connectcluster.md
@@ -5,7 +5,7 @@ menu:
     identifier: kf-connectcluster-concepts
     name: ConnectCluster
     parent: kf-concepts-kafka
-    weight: 15
+    weight: 25
 menu_name: docs_{{ .version }}
 section_menu_id: guides
 ---
diff --git a/docs/guides/kafka/concepts/connector.md b/docs/guides/kafka/concepts/connector.md
index e2908fb967..8f132a49f0 100644
--- a/docs/guides/kafka/concepts/connector.md
+++ b/docs/guides/kafka/concepts/connector.md
@@ -5,7 +5,7 @@ menu:
     identifier: kf-connector-concepts
     name: Connector
     parent: kf-concepts-kafka
-    weight: 20
+    weight: 30
 menu_name: docs_{{ .version }}
 section_menu_id: guides
 ---
@@ -70,8 +70,8 @@ Deletion policy `WipeOut` will delete the connector from the ConnectCluster when
 
 ## Next Steps
 
-- Learn how to use KubeDB to run a Apache Kafka cluster [here](/docs/guides/kafka/quickstart/overview/kafka/index.md).
-- Learn how to use KubeDB to run a Apache Kafka Connect cluster [here](/docs/guides/kafka/quickstart/overview/connectcluster/index.md).
+- Learn how to use KubeDB to run an Apache Kafka cluster [here](/docs/guides/kafka/quickstart/kafka/index.md).
+- Learn how to use KubeDB to run an Apache Kafka Connect cluster [here](/docs/guides/kafka/connectcluster/overview.md).
 - Detail concepts of [KafkaConnectorVersion object](/docs/guides/kafka/concepts/kafkaconnectorversion.md).
 - Learn to use KubeDB managed Kafka objects using [CLIs](/docs/guides/kafka/cli/cli.md).
 - Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/concepts/kafka.md b/docs/guides/kafka/concepts/kafka.md
index 361d4f65f5..048a28b612 100644
--- a/docs/guides/kafka/concepts/kafka.md
+++ b/docs/guides/kafka/concepts/kafka.md
@@ -302,7 +302,8 @@ NB. If `spec.topology` is set, then `spec.storage` needs to be empty. Instead us
 ### spec.monitor
 
 Kafka managed by KubeDB can be monitored with Prometheus operator out-of-the-box. To learn more,
-- [Monitor Apache with Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md)
+- [Monitor Apache Kafka with Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md)
+- [Monitor Apache Kafka with Built-in Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md)
 
 ### spec.podTemplate
diff --git a/docs/guides/kafka/concepts/kafkaautoscaler.md b/docs/guides/kafka/concepts/kafkaautoscaler.md
new file mode 100644
index 0000000000..576ceb15b1
--- /dev/null
+++ b/docs/guides/kafka/concepts/kafkaautoscaler.md
@@ -0,0 +1,164 @@
+---
+title: KafkaAutoscaler CRD
+menu:
+  docs_{{ .version }}:
+    identifier: kf-autoscaler-concepts
+    name: KafkaAutoscaler
+    parent: kf-concepts-kafka
+    weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# KafkaAutoscaler
+
+## What is KafkaAutoscaler
+
+`KafkaAutoscaler` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration for autoscaling [Kafka](https://kafka.apache.org/) compute resources and storage of database components in a Kubernetes native way.
+
+## KafkaAutoscaler CRD Specifications
+
+Like any official Kubernetes resource, a `KafkaAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.
+
+Here, some sample `KafkaAutoscaler` CROs for autoscaling different components of the database are given below:
+
+**Sample `KafkaAutoscaler` for combined cluster:**
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: KafkaAutoscaler
+metadata:
+  name: kf-autoscaler-combined
+  namespace: demo
+spec:
+  databaseRef:
+    name: kafka-dev
+  opsRequestOptions:
+    timeout: 3m
+    apply: IfReady
+  compute:
+    node:
+      trigger: "On"
+      podLifeTimeThreshold: 24h
+      minAllowed:
+        cpu: 250m
+        memory: 350Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+      resourceDiffPercentage: 10
+  storage:
+    node:
+      expansionMode: "Online"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+**Sample `KafkaAutoscaler` for topology cluster:**
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: KafkaAutoscaler
+metadata:
+  name: kf-autoscaler-topology
+  namespace: demo
+spec:
+  databaseRef:
+    name: kafka-prod
+  opsRequestOptions:
+    timeout: 3m
+    apply: IfReady
+  compute:
+    broker:
+      trigger: "On"
+      podLifeTimeThreshold: 24h
+      minAllowed:
+        cpu: 200m
+        memory: 300Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+      resourceDiffPercentage: 10
+    controller:
+      trigger: "On"
+      podLifeTimeThreshold: 24h
+      minAllowed:
+        cpu: 200m
+        memory: 300Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+      resourceDiffPercentage: 10
+  storage:
+    broker:
+      expansionMode: "Online"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+    controller:
+      expansionMode: "Online"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+Here, we are going to describe the various sections of a `KafkaAutoscaler` CRD.
+
+A `KafkaAutoscaler` object has the following fields in the `spec` section.
+
+### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [Kafka](/docs/guides/kafka/concepts/kafka.md) object for which the autoscaling will be performed. This field consists of the following sub-field:
+
+- **spec.databaseRef.name :** specifies the name of the [Kafka](/docs/guides/kafka/concepts/kafka.md) object.
+
+### spec.opsRequestOptions
+These are the options to pass to the internally created opsRequest CRO. `opsRequestOptions` has two fields: `timeout` and `apply`, as shown in the samples above.
+
+### spec.compute
+
+`spec.compute` specifies the autoscaling configuration for the compute resources, i.e. cpu and memory, of the database components. This field consists of the following sub-fields:
+
+- `spec.compute.node` indicates the desired compute autoscaling configuration for a combined Kafka cluster.
+- `spec.compute.broker` indicates the desired compute autoscaling configuration for the broker of a topology Kafka database.
+- `spec.compute.controller` indicates the desired compute autoscaling configuration for the controller of a topology Kafka database.
+
+
+All of them have the following sub-fields:
+
+- `trigger` indicates if compute autoscaling is enabled for this component of the database. If "On", compute autoscaling is enabled; if "Off", it is disabled.
+- `minAllowed` specifies the minimal amount of resources that will be recommended, default is no minimum.
+- `maxAllowed` specifies the maximum amount of resources that will be recommended, default is no maximum.
+- `controlledResources` specifies which types of compute resources are allowed for autoscaling. Allowed values are "cpu" and "memory".
+- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly".
+- `resourceDiffPercentage` specifies the minimum resource difference between the recommended value and the current value, in percentage. If the difference percentage is greater than this value, autoscaling will be triggered.
+- `podLifeTimeThreshold` specifies the minimum pod lifetime of at least one of the pods before triggering autoscaling.
+
+### spec.storage
+
+`spec.storage` specifies the autoscaling configuration for the storage resources of the database components. This field consists of the following sub-fields:
+
+- `spec.storage.node` indicates the desired storage autoscaling configuration for a combined Kafka cluster.
+- `spec.storage.broker` indicates the desired storage autoscaling configuration for the broker of a topology Kafka cluster.
+- `spec.storage.controller` indicates the desired storage autoscaling configuration for the controller of a topology Kafka cluster.
+
+
+All of them have the following sub-fields:
+
+- `trigger` indicates if storage autoscaling is enabled for this component of the database. If "On", storage autoscaling is enabled; if "Off", it is disabled.
+- `usageThreshold` indicates the usage percentage threshold; if the current storage usage exceeds it, storage autoscaling will be triggered.
+- `scalingThreshold` indicates the percentage of the current storage that will be scaled.
+- `expansionMode` indicates the volume expansion mode.
diff --git a/docs/guides/kafka/concepts/kafkaconnectorversion.md b/docs/guides/kafka/concepts/kafkaconnectorversion.md
index 5359845b13..fe06dde4ec 100644
--- a/docs/guides/kafka/concepts/kafkaconnectorversion.md
+++ b/docs/guides/kafka/concepts/kafkaconnectorversion.md
@@ -5,7 +5,7 @@ menu:
     identifier: kf-kafkaconnectorversion-concepts
     name: KafkaConnectorVersion
     parent: kf-concepts-kafka
-    weight: 30
+    weight: 50
 menu_name: docs_{{ .version }}
 section_menu_id: guides
 ---
@@ -88,4 +88,4 @@ helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
 
 - Learn about Kafka CRD [here](/docs/guides/kafka/concepts/kafka.md).
 - Learn about ConnectCluster CRD [here](/docs/guides/kafka/concepts/connectcluster.md).
-- Deploy your first ConnectCluster with KubeDB by following the guide [here](/docs/guides/kafka/quickstart/overview/connectcluster/index.md)
+- Deploy your first ConnectCluster with KubeDB by following the guide [here](/docs/guides/kafka/connectcluster/overview.md). diff --git a/docs/guides/kafka/concepts/kafkaopsrequest.md b/docs/guides/kafka/concepts/kafkaopsrequest.md new file mode 100644 index 0000000000..a5275cb57b --- /dev/null +++ b/docs/guides/kafka/concepts/kafkaopsrequest.md @@ -0,0 +1,622 @@ +--- +title: KafkaOpsRequests CRD +menu: + docs_{{ .version }}: + identifier: kf-opsrequest-concepts + name: KafkaOpsRequest + parent: kf-concepts-kafka + weight: 15 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + + +> New to KubeDB? Please start [here](/docs/README.md). + +# KafkaOpsRequest + +## What is KafkaOpsRequest + +`KafkaOpsRequest` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for [Kafka](https://kafka.apache.org/) administrative operations like database version updating, horizontal scaling, vertical scaling etc. in a Kubernetes native way. + +## KafkaOpsRequest CRD Specifications + +Like any official Kubernetes resource, a `KafkaOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections. + +Here, some sample `KafkaOpsRequest` CRs for different administrative operations is given below: + +**Sample `KafkaOpsRequest` for updating database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: update-version + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: kafka-prod + updateVersion: + targetVersion: 3.6.1 +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `KafkaOpsRequest` Objects for Horizontal Scaling of different component of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-hscale-combined + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: kafka-dev + horizontalScaling: + node: 3 +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-hscale-down-topology + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: kafka-prod + horizontalScaling: + topology: + broker: 2 + controller: 2 +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `KafkaOpsRequest` Objects for Vertical Scaling of different component of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-vscale-combined + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: kafka-dev + verticalScaling: + node: + resources: + requests: + memory: "1.5Gi" + cpu: "0.7" + limits: + memory: "2Gi" + cpu: "1" +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + 
phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-vscale-topology + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: kafka-prod + verticalScaling: + broker: + resources: + requests: + memory: "1.5Gi" + cpu: "0.7" + limits: + memory: "2Gi" + cpu: "1" + controller: + resources: + requests: + memory: "1.5Gi" + cpu: "0.7" + limits: + memory: "2Gi" + cpu: "1" +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `KafkaOpsRequest` Objects for Reconfiguring different kafka mode:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfiugre-combined + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-dev + configuration: + applyConfig: + server.properties: | + log.retention.hours=100 + default.replication.factor=2 +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfiugre-topology + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-prod + configuration: + applyConfig: + broker.properties: | + log.retention.hours=100 + default.replication.factor=2 + controller.properties: | + metadata.log.dir=/var/log/kafka/metadata-custom +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfiugre-combined + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-dev + configuration: + configSecret: + name: new-configsecret-combined +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfiugre-topology + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-prod + configuration: + configSecret: + name: new-configsecret-topology +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `KafkaOpsRequest` Objects for Volume Expansion of different database components:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-volume-exp-combined + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: kafka-dev + volumeExpansion: + mode: "Online" + node: 2Gi +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful 
+ status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-volume-exp-topology + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: kafka-prod + volumeExpansion: + mode: "Online" + broker: 2Gi + controller: 2Gi +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `KafkaOpsRequest` Objects for Reconfiguring TLS of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: kafka-prod + tls: + issuerRef: + name: kf-issuer + kind: Issuer + apiGroup: "cert-manager.io" + certificates: + - alias: client + emailAddresses: + - abc@appscode.com +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-rotate + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: kafka-dev + tls: + rotateCertificates: true +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-change-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: kafka-prod + tls: + issuerRef: + name: kf-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: kafka-prod + tls: + remove: true +``` + +Here, we are going to describe the various sections of a `KafkaOpsRequest` crd. + +A `KafkaOpsRequest` object has the following fields in the `spec` section. + +### spec.databaseRef + +`spec.databaseRef` is a required field that point to the [Kafka](/docs/guides/kafka/concepts/kafka.md) object for which the administrative operations will be performed. This field consists of the following sub-field: + +- **spec.databaseRef.name :** specifies the name of the [Kafka](/docs/guides/kafka/concepts/kafka.md) object. + +### spec.type + +`spec.type` specifies the kind of operation that will be applied to the database. Currently, the following types of operations are allowed in `KafkaOpsRequest`. + +- `UpdateVersion` +- `HorizontalScaling` +- `VerticalScaling` +- `VolumeExpansion` +- `Reconfigure` +- `ReconfigureTLS` +- `Restart` + +> You can perform only one type of operation on a single `KafkaOpsRequest` CR. For example, if you want to update your database and scale up its replica then you have to create two separate `KafkaOpsRequest`. At first, you have to create a `KafkaOpsRequest` for updating. Once it is completed, then you can create another `KafkaOpsRequest` for scaling. + +### spec.updateVersion + +If you want to update you Kafka version, you have to specify the `spec.updateVersion` section that specifies the desired version information. This field consists of the following sub-field: + +- `spec.updateVersion.targetVersion` refers to a [KafkaVersion](/docs/guides/kafka/concepts/kafkaversion.md) CR that contains the Kafka version information where you want to update. + +> You can only update between Kafka versions. KubeDB does not support downgrade for Kafka. 
+
+### spec.horizontalScaling
+
+If you want to scale up or scale down your Kafka cluster or different components of it, you have to specify the `spec.horizontalScaling` section. This field consists of the following sub-fields:
+
+- `spec.horizontalScaling.node` indicates the desired number of nodes for a Kafka combined cluster after scaling. For example, if your cluster currently has 4 combined nodes, and you want to add 2 more nodes, you have to specify 6 in the `spec.horizontalScaling.node` field. Similarly, if you want to remove one node from the cluster, you have to specify 3.
+- `spec.horizontalScaling.topology` indicates the configuration of topology nodes for a Kafka topology cluster after scaling. This field consists of the following sub-fields:
+  - `spec.horizontalScaling.topology.broker` indicates the desired number of broker nodes for a Kafka topology cluster after scaling.
+  - `spec.horizontalScaling.topology.controller` indicates the desired number of controller nodes for a Kafka topology cluster after scaling.
+
+> If the referenced Kafka object is a combined cluster, then you can only specify the `spec.horizontalScaling.node` field. If the referenced Kafka object is a topology cluster, then you can only specify the `spec.horizontalScaling.topology` field. You can not specify both fields at the same time.
+
+### spec.verticalScaling
+
+`spec.verticalScaling` is a required field specifying the information of `Kafka` resources like `cpu`, `memory` etc. that will be scaled. This field consists of the following sub-fields:
+
+- `spec.verticalScaling.node` indicates the desired resources for a combined Kafka cluster after scaling.
+- `spec.verticalScaling.broker` indicates the desired resources for the broker of a Kafka topology cluster after scaling.
+- `spec.verticalScaling.controller` indicates the desired resources for the controller of a Kafka topology cluster after scaling.
+
+> If the referenced Kafka object is a combined cluster, then you can only specify the `spec.verticalScaling.node` field. If the referenced Kafka object is a topology cluster, then you can specify `spec.verticalScaling.broker` or `spec.verticalScaling.controller` or both. You can not specify `spec.verticalScaling.node` together with any other field, but you can specify `spec.verticalScaling.broker` and `spec.verticalScaling.controller` at the same time.
+
+All of them have the below structure:
+
+```yaml
+requests:
+  memory: "200Mi"
+  cpu: "0.1"
+limits:
+  memory: "300Mi"
+  cpu: "0.2"
+```
+
+Here, when you specify the resource request, the scheduler uses this information to decide which node to place the container of the Pod on, and when you specify a resource limit for the container, the `kubelet` enforces that limit so that the running container is not allowed to use more of that resource than the limit you set. You can find more details [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+
+### spec.volumeExpansion
+
+> To use the volume expansion feature the storage class must support volume expansion.
+
+If you want to expand the volume of your Kafka cluster or different components of it, you have to specify the `spec.volumeExpansion` section. This field consists of the following sub-fields:
+
+- `spec.volumeExpansion.mode` specifies the volume expansion mode. Supported values are `Online` & `Offline`. The default is `Online`.
+- `spec.volumeExpansion.node` indicates the desired size for the persistent volume of a combined Kafka cluster.
+- `spec.volumeExpansion.broker` indicates the desired size for the persistent volume of the broker of a Kafka topology cluster.
+- `spec.volumeExpansion.controller` indicates the desired size for the persistent volume of the controller of a Kafka topology cluster.
+
+> If the referenced Kafka object is a combined cluster, then you can only specify the `spec.volumeExpansion.node` field. If the referenced Kafka object is a topology cluster, then you can specify `spec.volumeExpansion.broker` or `spec.volumeExpansion.controller` or both. You can not specify `spec.volumeExpansion.node` together with any other field, but you can specify `spec.volumeExpansion.broker` and `spec.volumeExpansion.controller` at the same time.
+
+All of them refer to [Quantity](https://v1-22.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#quantity-resource-core) types of Kubernetes.
+
+Example usage of this field is given below:
+
+```yaml
+spec:
+  volumeExpansion:
+    node: "2Gi"
+```
+
+This will expand the volume size of all the combined nodes to 2Gi.
+
+### spec.configuration
+
+If you want to reconfigure your running Kafka cluster or different components of it with new custom configuration, you have to specify the `spec.configuration` section. This field consists of the following sub-fields:
+
+- `spec.configuration.configSecret` points to a secret in the same namespace of the Kafka resource, which contains the new custom configurations. If any configSecret was set before in the database, this secret will replace it. The value of the `stringData` field of the secret looks like below:
+```yaml
+server.properties: |
+  default.replication.factor=3
+  offsets.topic.replication.factor=3
+  log.retention.hours=100
+broker.properties: |
+  default.replication.factor=3
+  offsets.topic.replication.factor=3
+  log.retention.hours=100
+controller.properties: |
+  default.replication.factor=3
+  offsets.topic.replication.factor=3
+  log.retention.hours=100
+```
+> If you want to reconfigure a combined Kafka cluster, then you can only specify the `server.properties` field. If you want to reconfigure a topology Kafka cluster, then you can specify `broker.properties` or `controller.properties` or both. You can not specify `server.properties` together with any other field, but you can specify `broker.properties` and `controller.properties` at the same time.
+
+- `applyConfig` contains the new custom config as a string which will be merged with the previous configuration. It is a map where the key supports 3 values, namely `server.properties`, `broker.properties`, and `controller.properties`, and the value represents the corresponding configurations.
+
+```yaml
+  applyConfig:
+    server.properties: |
+      default.replication.factor=3
+      offsets.topic.replication.factor=3
+      log.retention.hours=100
+    broker.properties: |
+      default.replication.factor=3
+      offsets.topic.replication.factor=3
+      log.retention.hours=100
+    controller.properties: |
+      metadata.log.dir=/var/log/kafka/metadata-custom
+```
+
+- `removeCustomConfig` is a boolean field. Set this field to `true` if you want to remove all the custom configuration from the deployed Kafka cluster.
+
+### spec.tls
+
+If you want to reconfigure the TLS configuration of your Kafka i.e. add TLS, remove TLS, update the issuer/cluster issuer or certificates, or rotate the certificates, you have to specify the `spec.tls` section. This field consists of the following sub-fields:
+
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/guides/kafka/concepts/kafka.md#spectls).
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificates of this Kafka.
+- `spec.tls.remove` specifies that we want to remove TLS from this Kafka.
+
+### spec.timeout
+As we internally retry the ops request steps multiple times, this `timeout` field helps the users to specify the timeout for those steps of the ops request (in seconds).
+If a step doesn't finish within the specified timeout, the ops request will result in failure.
+
+### spec.apply
+This field controls the execution of the opsRequest depending on the database state. It has two supported values: `Always` & `IfReady`.
+Use `IfReady` if you want to process the opsRequest only when the database is Ready. And use `Always` if you want to process the execution of the opsRequest irrespective of the database state.
+
+### KafkaOpsRequest `Status`
+
+`.status` describes the current state and progress of a `KafkaOpsRequest` operation. It has the following fields:
+
+### status.phase
+
+`status.phase` indicates the overall phase of the operation for this `KafkaOpsRequest`. It can have the following values:
+
+| Phase       | Meaning                                                                           |
+|-------------|-----------------------------------------------------------------------------------|
+| Successful  | KubeDB has successfully performed the operation requested in the KafkaOpsRequest  |
+| Progressing | KubeDB has started the execution of the applied KafkaOpsRequest                   |
+| Failed      | KubeDB has failed the operation requested in the KafkaOpsRequest                  |
+| Denied      | KubeDB has denied the operation requested in the KafkaOpsRequest                  |
+| Skipped     | KubeDB has skipped the operation requested in the KafkaOpsRequest                 |
+
+Important: The Ops-manager operator can skip an opsRequest only if its execution has not started yet and there is a newer opsRequest applied in the cluster. In this case, the `spec.type` has to be the same as that of the skipped one.
+
+### status.observedGeneration
+
+`status.observedGeneration` shows the most recent generation observed by the `KafkaOpsRequest` controller.
+
+### status.conditions
+
+`status.conditions` is an array that specifies the conditions of different steps of `KafkaOpsRequest` processing. Each condition entry has the following fields:
+
+- `type` specifies the type of the condition. KafkaOpsRequest has the following types of conditions:
+
+| Type                          | Meaning                                                                     |
+|-------------------------------|------------------------------------------------------------------------------|
+| `Progressing`                 | Specifies that the operation is now in the progressing state                |
+| `Successful`                  | Specifies such a state that the operation on the database was successful.   |
+| `HaltDatabase`                | Specifies such a state that the database is halted by the operator          |
+| `ResumeDatabase`              | Specifies such a state that the database is resumed by the operator         |
+| `Failed`                      | Specifies such a state that the operation on the database failed.           |
+| `StartingBalancer`            | Specifies such a state that the balancer has successfully started           |
+| `StoppingBalancer`            | Specifies such a state that the balancer has successfully stopped           |
+| `UpdateShardImage`            | Specifies such a state that the shard images have been updated              |
+| `UpdateReplicaSetImage`       | Specifies such a state that the ReplicaSet image has been updated           |
+| `UpdateConfigServerImage`     | Specifies such a state that the ConfigServer image has been updated         |
+| `UpdateMongosImage`           | Specifies such a state that the Mongos image has been updated               |
+| `UpdatePetSetResources`       | Specifies such a state that the PetSet resources have been updated          |
+| `UpdateShardResources`        | Specifies such a state that the shard resources have been updated           |
+| `UpdateReplicaSetResources`   | Specifies such a state that the ReplicaSet resources have been updated      |
+| `UpdateConfigServerResources` | Specifies such a state that the ConfigServer resources have been updated    |
+| `UpdateMongosResources`       | Specifies such a state that the Mongos resources have been updated          |
+| `ScaleDownReplicaSet`         | Specifies the scale down operation of the ReplicaSet                        |
+| `ScaleUpReplicaSet`           | Specifies the scale up operation of the ReplicaSet                          |
+| `ScaleUpShardReplicas`        | Specifies the scale up operation of shard replicas                          |
+| `ScaleDownShardReplicas`      | Specifies the scale down operation of shard replicas                        |
+| `ScaleDownConfigServer`       | Specifies the scale down operation of the config server                     |
+| `ScaleUpConfigServer`         | Specifies the scale up operation of the config server                       |
+| `ScaleMongos`                 | Specifies the scale operation of Mongos                                     |
+| `VolumeExpansion`             | Specifies the volume expansion operation of the database                    |
+| `ReconfigureReplicaset`       | Specifies the reconfiguration of the ReplicaSet nodes                       |
+| `ReconfigureMongos`           | Specifies the reconfiguration of the Mongos nodes                           |
+| `ReconfigureShard`            | Specifies the reconfiguration of the shard nodes                            |
+| `ReconfigureConfigServer`     | Specifies the reconfiguration of the config server nodes                    |
+
+- The `status` field is a string, with possible values `True`, `False`, and `Unknown`.
+  - `status` will be `True` if the current transition succeeded.
+  - `status` will be `False` if the current transition failed.
+  - `status` will be `Unknown` if the current transition was denied.
+- The `message` field is a human-readable message indicating details about the condition.
+- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition.
+- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another.
+- The `observedGeneration` shows the most recent condition transition generation observed by the controller.
diff --git a/docs/guides/kafka/concepts/kafkaversion.md b/docs/guides/kafka/concepts/kafkaversion.md
index 405eecb185..ffcf5ea27a 100644
--- a/docs/guides/kafka/concepts/kafkaversion.md
+++ b/docs/guides/kafka/concepts/kafkaversion.md
@@ -5,7 +5,7 @@ menu:
     identifier: kf-catalog-concepts
     name: KafkaVersion
     parent: kf-concepts-kafka
-    weight: 25
+    weight: 45
 menu_name: docs_{{ .version }}
 section_menu_id: guides
 ---
@@ -115,4 +115,4 @@ helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
 ## Next Steps
 
 - Learn about Kafka CRD [here](/docs/guides/kafka/concepts/kafka.md).
-- Deploy your first Kafka database with KubeDB by following the guide [here](/docs/guides/kafka/quickstart/overview/kafka/index.md). +- Deploy your first Kafka database with KubeDB by following the guide [here](/docs/guides/kafka/quickstart/kafka/index.md). diff --git a/docs/guides/kafka/concepts/restproxy.md b/docs/guides/kafka/concepts/restproxy.md new file mode 100644 index 0000000000..9f43a25ad0 --- /dev/null +++ b/docs/guides/kafka/concepts/restproxy.md @@ -0,0 +1,163 @@ +--- +title: RestProxy CRD +menu: + docs_{{ .version }}: + identifier: kf-restproxy-concepts + name: RestProxy + parent: kf-concepts-kafka + weight: 35 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# RestProxy + +## What is RestProxy + +`RestProxy` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [RestProxy](https://www.apicur.io/registry/) in a Kubernetes native way. You only need to describe the desired configuration in a `RestProxy` object, and the KubeDB operator will create Kubernetes objects in the desired state for you. + +## RestProxy Spec + +As with all other Kubernetes objects, a RestProxy needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example RestProxy object. + +```yaml +apiVersion: kafka.kubedb.com/v1alpha1 +kind: RestProxy +metadata: + name: restproxy + namespace: demo +spec: + version: 3.15.0 + healthChecker: + failureThreshold: 3 + periodSeconds: 20 + timeoutSeconds: 10 + replicas: 3 + kafkaRef: + name: kafka + namespace: demo + podTemplate: + metadata: + annotations: + passMe: ToDatabasePod + labels: + thisLabel: willGoToPod + controller: + annotations: + passMe: ToPetSet + labels: + thisLabel: willGoToSts + deletionPolicy: WipeOut +``` + +### spec.version + +`spec.version` is a required field specifying the name of the [SchemaRegistryVersion](/docs/guides/kafka/concepts/schemaregistryversion.md) CR where the docker images are specified. Currently, when you install KubeDB, it creates the following `SchemaRegistryVersion` resources, +- `2.5.11.final` +- `3.15.0` + +### spec.replicas + +`spec.replicas` the number of instances in Rest Proxy. + +KubeDB uses `PodDisruptionBudget` to ensure that majority of these replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum is maintained. + +### spec.kafkaRef + +`spec.kafkaRef` is a optional field that specifies the name and namespace of the appbinding for `Kafka` object that the `RestProxy` object is associated with. +```yaml +kafkaRef: + name: + namespace: +``` + +### spec.podTemplate + +KubeDB allows providing a template for pod through `spec.podTemplate`. KubeDB operator will pass the information provided in `spec.podTemplate` to the PetSet created for RestProxy. + +KubeDB accept following fields to set in `spec.podTemplate:` + +- metadata: + - annotations (pod's annotation) + - labels (pod's labels) +- controller: + - annotations (petset's annotation) + - labels (petset's labels) +- spec: + - volumes + - initContainers + - containers + - imagePullSecrets + - nodeSelector + - affinity + - serviceAccountName + - schedulerName + - tolerations + - priorityClassName + - priority + - securityContext + - livenessProbe + - readinessProbe + - lifecycle + +You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/39bf8b2/api/v2/types.go#L44-L279). 
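+
+As an illustration, here is a minimal sketch of a `RestProxy` that pins pods to labeled nodes and requests compute resources through `spec.podTemplate`, using the `nodeSelector` and `resources` fields described below. The `disktype: ssd` node label is an assumption for this example; adjust it to labels that actually exist in your cluster:
+
+```yaml
+apiVersion: kafka.kubedb.com/v1alpha1
+kind: RestProxy
+metadata:
+  name: restproxy
+  namespace: demo
+spec:
+  version: 3.15.0
+  kafkaRef:
+    name: kafka
+    namespace: demo
+  podTemplate:
+    spec:
+      # assumed node label; pods will only be scheduled on matching nodes
+      nodeSelector:
+        disktype: ssd
+      # compute resources requested for the Rest Proxy pods
+      resources:
+        requests:
+          cpu: 500m
+          memory: 1Gi
+        limits:
+          memory: 1Gi
+```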
Usage of some fields of `spec.podTemplate` is described below,
+
+#### spec.podTemplate.spec.nodeSelector
+
+`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).
+
+#### spec.podTemplate.spec.resources
+
+`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
+
+### spec.serviceTemplates
+
+You can also provide templates for the services created by the KubeDB operator for the Kafka cluster through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services.
+
+KubeDB allows the following fields to set in `spec.serviceTemplates`:
+- `alias` represents the identifier of the service. It has the following possible value:
+  - `stats` is used for the exporter service identification.
+- metadata:
+  - labels
+  - annotations
+- spec:
+  - type
+  - ports
+  - clusterIP
+  - externalIPs
+  - loadBalancerIP
+  - loadBalancerSourceRanges
+  - externalTrafficPolicy
+  - healthCheckNodePort
+  - sessionAffinityConfig
+
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.21.1/api/v1/types.go#L237) to understand these fields in detail.
+
+### spec.deletionPolicy
+
+`spec.deletionPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `RestProxy` CRD or which resources KubeDB should keep or delete when you delete the `RestProxy` CRD. KubeDB provides the following deletion policies:
+
+- Delete
+- DoNotTerminate
+- WipeOut
+
+When `deletionPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as `spec.deletionPolicy` is set to `DoNotTerminate`.
+
+## spec.healthChecker
+It defines the attributes for the health checker.
+- `spec.healthChecker.periodSeconds` specifies how often to perform the health check.
+- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out.
+- `spec.healthChecker.failureThreshold` specifies the minimum consecutive failures for the healthChecker to be considered failed.
+- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not.
+
+Know details about KubeDB Health checking from this [blog post](https://appscode.com/blog/post/kubedb-health-checker/).
+
+## Next Steps
+
+- Learn how to use KubeDB to run an Apache Kafka cluster [here](/docs/guides/kafka/README.md).
+- Monitor your RestProxy with KubeDB using [`out-of-the-box` Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+- Detail concepts of [KafkaConnectorVersion object](/docs/guides/kafka/concepts/kafkaconnectorversion.md).
+- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/guides/kafka/cli/cli.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/concepts/schemaregistry.md b/docs/guides/kafka/concepts/schemaregistry.md new file mode 100644 index 0000000000..7d63ab3910 --- /dev/null +++ b/docs/guides/kafka/concepts/schemaregistry.md @@ -0,0 +1,163 @@ +--- +title: SchemaRegistry CRD +menu: + docs_{{ .version }}: + identifier: kf-schemaregistry-concepts + name: SchemaRegistry + parent: kf-concepts-kafka + weight: 40 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# SchemaRegistry + +## What is SchemaRegistry + +`SchemaRegistry` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [SchemaRegistry](https://www.apicur.io/registry/) in a Kubernetes native way. You only need to describe the desired configuration in a `SchemaRegistry` object, and the KubeDB operator will create Kubernetes objects in the desired state for you. + +## SchemaRegistry Spec + +As with all other Kubernetes objects, a SchemaRegistry needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example SchemaRegistry object. + +```yaml +apiVersion: kafka.kubedb.com/v1alpha1 +kind: SchemaRegistry +metadata: + name: schemaregistry + namespace: demo +spec: + version: 2.5.11.final + healthChecker: + failureThreshold: 3 + periodSeconds: 20 + timeoutSeconds: 10 + replicas: 3 + kafkaRef: + name: kafka + namespace: demo + podTemplate: + metadata: + annotations: + passMe: ToDatabasePod + labels: + thisLabel: willGoToPod + controller: + annotations: + passMe: ToPetSet + labels: + thisLabel: willGoToSts + deletionPolicy: WipeOut +``` + +### spec.version + +`spec.version` is a required field specifying the name of the [SchemaRegistryVersion](/docs/guides/kafka/concepts/schemaregistryversion.md) CR where the docker images are specified. Currently, when you install KubeDB, it creates the following `SchemaRegistryVersion` resources, +- `2.5.11.final` +- `3.15.0` + +### spec.replicas + +`spec.replicas` the number of instances in SchemaRegistry. + +KubeDB uses `PodDisruptionBudget` to ensure that majority of these replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum is maintained. + +### spec.kafkaRef + +`spec.kafkaRef` is a optional field that specifies the name and namespace of the appbinding for `Kafka` object that the `SchemaRegistry` object is associated with. +```yaml +kafkaRef: + name: + namespace: +``` + +### spec.podTemplate + +KubeDB allows providing a template for pod through `spec.podTemplate`. KubeDB operator will pass the information provided in `spec.podTemplate` to the PetSet created for SchemaRegistry. + +KubeDB accept following fields to set in `spec.podTemplate:` + +- metadata: + - annotations (pod's annotation) + - labels (pod's labels) +- controller: + - annotations (petset's annotation) + - labels (petset's labels) +- spec: + - volumes + - initContainers + - containers + - imagePullSecrets + - nodeSelector + - affinity + - serviceAccountName + - schedulerName + - tolerations + - priorityClassName + - priority + - securityContext + - livenessProbe + - readinessProbe + - lifecycle + +You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/39bf8b2/api/v2/types.go#L44-L279). 
Usage of some fields of `spec.podTemplate` is described below,
+
+#### spec.podTemplate.spec.nodeSelector
+
+`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).
+
+#### spec.podTemplate.spec.resources
+
+`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
+
+### spec.serviceTemplates
+
+You can also provide templates for the services created by the KubeDB operator for the Kafka cluster through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services.
+
+KubeDB allows the following fields to set in `spec.serviceTemplates`:
+- `alias` represents the identifier of the service. It has the following possible value:
+  - `stats` is used for the exporter service identification.
+- metadata:
+  - labels
+  - annotations
+- spec:
+  - type
+  - ports
+  - clusterIP
+  - externalIPs
+  - loadBalancerIP
+  - loadBalancerSourceRanges
+  - externalTrafficPolicy
+  - healthCheckNodePort
+  - sessionAffinityConfig
+
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.21.1/api/v1/types.go#L237) to understand these fields in detail.
+
+### spec.deletionPolicy
+
+`spec.deletionPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `SchemaRegistry` CRD or which resources KubeDB should keep or delete when you delete the `SchemaRegistry` CRD. KubeDB provides the following deletion policies:
+
+- Delete
+- DoNotTerminate
+- WipeOut
+
+When `deletionPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as `spec.deletionPolicy` is set to `DoNotTerminate`.
+
+## spec.healthChecker
+It defines the attributes for the health checker.
+- `spec.healthChecker.periodSeconds` specifies how often to perform the health check.
+- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out.
+- `spec.healthChecker.failureThreshold` specifies the minimum consecutive failures for the healthChecker to be considered failed.
+- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not.
+
+Know details about KubeDB Health checking from this [blog post](https://appscode.com/blog/post/kubedb-health-checker/).
+
+## Next Steps
+
+- Learn how to use KubeDB to run an Apache Kafka cluster [here](/docs/guides/kafka/README.md).
+- Monitor your SchemaRegistry with KubeDB using [`out-of-the-box` Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+- Detail concepts of [KafkaConnectorVersion object](/docs/guides/kafka/concepts/kafkaconnectorversion.md).
+- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/guides/kafka/cli/cli.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/concepts/schemaregistryversion.md b/docs/guides/kafka/concepts/schemaregistryversion.md
new file mode 100644
index 0000000000..d1a84915d4
--- /dev/null
+++ b/docs/guides/kafka/concepts/schemaregistryversion.md
@@ -0,0 +1,93 @@
+---
+title: SchemaRegistryVersion CRD
+menu:
+  docs_{{ .version }}:
+    identifier: kf-schemaregistryversion-concepts
+    name: SchemaRegistryVersion
+    parent: kf-concepts-kafka
+    weight: 55
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# SchemaRegistryVersion
+
+## What is SchemaRegistryVersion
+
+`SchemaRegistryVersion` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration to specify the docker images to be used for installing SchemaRegistry and RestProxy with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `SchemaRegistryVersion` custom resource will be created automatically for every supported SchemaRegistry version. You have to specify the name of a `SchemaRegistryVersion` CR in the `spec.version` field of a SchemaRegistry or RestProxy CR. Then, KubeDB will use the docker images specified in that `SchemaRegistryVersion` CR to install your SchemaRegistry or RestProxy.
+
+Using a separate CR for the respective docker images and policies keeps them independent of the KubeDB operator. This also allows the users to use a custom image for the SchemaRegistry or RestProxy.
+
+## SchemaRegistryVersion Spec
+
+As with all other Kubernetes objects, a SchemaRegistryVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: SchemaRegistryVersion
+metadata:
+  annotations:
+    meta.helm.sh/release-name: kubedb-catalog
+    meta.helm.sh/release-namespace: kubedb
+  creationTimestamp: "2024-08-30T04:54:14Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/instance: kubedb-catalog
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: kubedb-catalog
+    app.kubernetes.io/version: v2024.8.21
+    helm.sh/chart: kubedb-catalog-v2024.8.21
+  name: 2.5.11.final
+  resourceVersion: "133199"
+  uid: deca9f55-6fef-4477-a66d-7e1fe77d9bbd
+spec:
+  distribution: Apicurio
+  inMemory:
+    image: apicurio/apicurio-registry-mem:2.5.11.Final
+  registry:
+    image: apicurio/apicurio-registry-kafkasql:2.5.11.Final
+  securityContext:
+    runAsUser: 1001
+  version: 2.5.11
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `SchemaRegistryVersion` CR. You have to specify this name in the `spec.version` field of the SchemaRegistry or RestProxy CR.
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of SchemaRegistry that has been used to build the docker image specified in the `spec.registry` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will skip processing this CRD object and will add an event to the CRD object specifying that the DB version is deprecated.
+
+### spec.registry.image
+
+`spec.registry.image` is a required field that specifies the docker image which will be used to install the schema registry or restproxy by the KubeDB operator.
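+
+To see which images each installed `SchemaRegistryVersion` ships, you can print the CR names alongside `spec.registry.image`. A quick sketch (the versions listed depend on your installed catalog):
+
+```bash
+# Print each SchemaRegistryVersion name together with its registry image.
+$ kubectl get schemaregistryversion \
+    -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.registry.image}{"\n"}{end}'
+```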
+ +### spec.inMemory.image + +`spec.inMemory.image` is a optional field that specifies the docker image which will be used to install schema registry in memory by KubeDB operator. + +```bash +helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \ + --namespace kubedb --create-namespace \ + --set additionalPodSecurityPolicies[0]=custom-db-policy \ + --set additionalPodSecurityPolicies[1]=custom-snapshotter-policy \ + --set-file global.license=/path/to/the/license.txt \ + --wait --burst-limit=10000 --debug +``` + +## Next Steps + +- Learn about Kafka CRD [here](/docs/guides/kafka/concepts/kafka.md). +- Learn about SchemaRegistry CRD [here](/docs/guides/kafka/concepts/schemaregistry.md). +- Deploy your first ConnectCluster with KubeDB by following the guide [here](/docs/guides/kafka/connectcluster/overview.md). diff --git a/docs/guides/kafka/configuration/_index.md b/docs/guides/kafka/configuration/_index.md new file mode 100644 index 0000000000..81167c2af8 --- /dev/null +++ b/docs/guides/kafka/configuration/_index.md @@ -0,0 +1,10 @@ +--- +title: Run Kafka with Custom Configuration +menu: + docs_{{ .version }}: + identifier: kf-configuration + name: Custom Configuration + parent: kf-kafka-guides + weight: 30 +menu_name: docs_{{ .version }} +--- diff --git a/docs/guides/kafka/configuration/kafka-combined.md b/docs/guides/kafka/configuration/kafka-combined.md new file mode 100644 index 0000000000..fe51efa6e3 --- /dev/null +++ b/docs/guides/kafka/configuration/kafka-combined.md @@ -0,0 +1,164 @@ +--- +title: Configuring Kafka Combined Cluster +menu: + docs_{{ .version }}: + identifier: kf-configuration-combined-cluster + name: Combined Cluster + parent: kf-configuration + weight: 15 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Configure Kafka Combined Cluster + +In Kafka combined cluster, every node can perform as broker and controller nodes simultaneously. In this tutorial, we will see how to configure a combined cluster. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create namespace demo +namespace/demo created + +$ kubectl get namespace +NAME STATUS AGE +demo Active 9s +``` + +> Note: YAML files used in this tutorial are stored in [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka/configuration/ +) in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Find Available StorageClass + +We will have to provide `StorageClass` in Kafka CR specification. Check available `StorageClass` in your cluster using the following command, + +```bash +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 1h +``` + +Here, we have `standard` StorageClass in our cluster from [Local Path Provisioner](https://github.com/rancher/local-path-provisioner). 
+ +## Use Custom Configuration + +Say we want to change the default log retention time and default replication factor of creating a topic. Let's create the `server.properties` file with our desire configurations. + +**server.properties:** + +```properties +log.retention.hours=100 +default.replication.factor=2 +``` + +Let's create a k8s secret containing the above configuration where the file name will be the key and the file-content as the value: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: configsecret-combined + namespace: demo +stringData: + server.properties: |- + log.retention.hours=100 + default.replication.factor=2 +``` + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/configuration/configsecret-combined.yaml +secret/configsecret-combined created +``` + +Now that the config secret is created, it needs to be mention in the [Kafka](/docs/guides/kafka/concepts/kafka.md) object's yaml: + +```yaml +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-dev + namespace: demo +spec: + replicas: 2 + version: 3.6.1 + configSecret: + name: configsecret-combined + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut +``` + +Now, create the Kafka object by the following command: + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/configuration/kafka-combined.yaml +kafka.kubedb.com/kafka-dev created +``` + +Now, wait for the Kafka to become ready: + +```bash +$ kubectl get kf -n demo -w +NAME TYPE VERSION STATUS AGE +kafka-dev kubedb.com/v1 3.6.1 Provisioning 0s +kafka-dev kubedb.com/v1 3.6.1 Provisioning 24s +. +. +kafka-dev kubedb.com/v1 3.6.1 Ready 92s +``` + +## Verify Configuration + +Let's exec into one of the kafka pod that we have created and check the configurations are applied or not: + +Exec into the Kafka pod: + +```bash +$ kubectl exec -it -n demo kafka-dev-0 -- bash +kafka@kafka-dev-0:~$ +``` + +Now, execute the following commands to see the configurations: +```bash +kafka@kafka-dev-0:~$ kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep log.retention.hours + log.retention.hours=100 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=100, DEFAULT_CONFIG:log.retention.hours=168} + log.retention.hours=100 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=100, DEFAULT_CONFIG:log.retention.hours=168} +kafka@kafka-dev-0:~$ kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep default.replication.factor + default.replication.factor=2 sensitive=false synonyms={STATIC_BROKER_CONFIG:default.replication.factor=2, DEFAULT_CONFIG:default.replication.factor=1} + default.replication.factor=2 sensitive=false synonyms={STATIC_BROKER_CONFIG:default.replication.factor=2, DEFAULT_CONFIG:default.replication.factor=1} +``` +Here, we can see that our given configuration is applied to the Kafka cluster for all brokers. 
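+
+Optionally, you can sanity-check the new broker defaults by creating a topic without an explicit replication factor; the broker should then apply `default.replication.factor=2`. The topic name `config-test` below is arbitrary:
+
+```bash
+kafka@kafka-dev-0:~$ kafka-topics.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --create --topic config-test --partitions 1
+Created topic config-test.
+kafka@kafka-dev-0:~$ kafka-topics.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --topic config-test
+```
+
+The `--describe` output should report `ReplicationFactor: 2` for the topic.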
+ +## Cleanup + +To cleanup the Kubernetes resources created by this tutorial, run: + +```bash +$ kubectl delete kf -n demo kafka-dev +$ kubectl delete secret -n demo configsecret-combined +$ kubectl delete namespace demo +``` + +## Next Steps + +- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md). +- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md). +- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md). + +[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).) +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). + diff --git a/docs/guides/kafka/configuration/kafka-topology.md b/docs/guides/kafka/configuration/kafka-topology.md new file mode 100644 index 0000000000..c3161647d5 --- /dev/null +++ b/docs/guides/kafka/configuration/kafka-topology.md @@ -0,0 +1,204 @@ +--- +title: Configuring Kafka Topology Cluster +menu: + docs_{{ .version }}: + identifier: kf-configuration-topology-cluster + name: Topology Cluster + parent: kf-configuration + weight: 15 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Configure Kafka Topology Cluster + +In Kafka topology cluster, broker and controller nodes run separately. In this tutorial, we will see how to configure a topology cluster. + +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create namespace demo +namespace/demo created + +$ kubectl get namespace +NAME STATUS AGE +demo Active 9s +``` + +> Note: YAML files used in this tutorial are stored in [here](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka/configuration/ +) in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Find Available StorageClass + +We will have to provide `StorageClass` in Kafka CR specification. Check available `StorageClass` in your cluster using the following command, + +```bash +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 1h +``` + +Here, we have `standard` StorageClass in our cluster from [Local Path Provisioner](https://github.com/rancher/local-path-provisioner). + +## Use Custom Configuration + +Say we want to change the default log retention time and default replication factor of creating a topic of brokers. Let's create the `broker.properties` file with our desire configurations. + +**broker.properties:** + +```properties +log.retention.hours=100 +default.replication.factor=2 +``` + +and we also want to change the metadata.log.dir of the all controller nodes. Let's create the `controller.properties` file with our desire configurations. 
+ +**controller.properties:** + +```properties +metadata.log.dir=/var/log/kafka/metadata-custom +``` + +Let's create a k8s secret containing the above configuration where the file name will be the key and the file-content as the value: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: configsecret-topology + namespace: demo +stringData: + broker.properties: |- + log.retention.hours=100 + default.replication.factor=2 + controller.properties: |- + metadata.log.dir=/var/log/kafka/metadata-custom +``` + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/configuration/configsecret-topology.yaml +secret/configsecret-topology created +``` + +Now that the config secret is created, it needs to be mention in the [Kafka](/docs/guides/kafka/concepts/kafka.md) object's yaml: + +```yaml +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.6.1 + configSecret: + name: configsecret-topology + topology: + broker: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut +``` + +Now, create the Kafka object by the following command: + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/configuration/kafka-topology.yaml +kafka.kubedb.com/kafka-prod created +``` + +Now, wait for the Kafka to become ready: + +```bash +$ kubectl get kf -n demo -w +NAME TYPE VERSION STATUS AGE +kafka-prod kubedb.com/v1 3.6.1 Provisioning 5s +kafka-prod kubedb.com/v1 3.6.1 Provisioning 7s +. +. +kafka-prod kubedb.com/v1 3.6.1 Ready 2m +``` + +## Verify Configuration + +Let's exec into one of the kafka broker pod that we have created and check the configurations are applied or not: + +Exec into the Kafka broker: + +```bash +$ kubectl exec -it -n demo kafka-prod-broker-0 -- bash +kafka@kafka-prod-broker-0:~$ +``` + +Now, execute the following commands to see the configurations: +```bash +kafka@kafka-prod-broker-0:~$ kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep log.retention.hours + log.retention.hours=100 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=100, DEFAULT_CONFIG:log.retention.hours=168} + log.retention.hours=100 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=100, DEFAULT_CONFIG:log.retention.hours=168} +kafka@kafka-prod-broker-0:~$ kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep default.replication.factor + default.replication.factor=2 sensitive=false synonyms={STATIC_BROKER_CONFIG:default.replication.factor=2, DEFAULT_CONFIG:default.replication.factor=1} + default.replication.factor=2 sensitive=false synonyms={STATIC_BROKER_CONFIG:default.replication.factor=2, DEFAULT_CONFIG:default.replication.factor=1} +``` +Here, we can see that our given configuration is applied to the Kafka cluster for all brokers. 
+
+Now, let's exec into one of the Kafka controller pods that we have created and check whether the configurations have been applied.
+
+Exec into the Kafka controller:
+
+```bash
+$ kubectl exec -it -n demo kafka-prod-controller-0 -- bash
+kafka@kafka-prod-controller-0:~$
+```
+
+Now, execute the following command to see the metadata storage directory:
+```bash
+kafka@kafka-prod-controller-0:~$ ls /var/log/kafka/
+1000  cluster_id  metadata-custom
+```
+
+Here, we can see that our given configuration is applied to the controller. The metadata log directory has been changed to `/var/log/kafka/metadata-custom`.
+
+## Cleanup
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl delete kf -n demo kafka-prod
+
+$ kubectl delete secret -n demo configsecret-topology
+
+$ kubectl delete namespace demo
+```
+
+## Next Steps
+
+- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).
+- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md).
+- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
+
diff --git a/docs/guides/kafka/connectcluster/connectcluster.md b/docs/guides/kafka/connectcluster/connectcluster.md
index 794602c753..36d2d2d5a8 100644
--- a/docs/guides/kafka/connectcluster/connectcluster.md
+++ b/docs/guides/kafka/connectcluster/connectcluster.md
@@ -182,7 +182,7 @@ Hence, the cluster is ready to use.
 Let's check the k8s resources created by the operator on the deployment of ConnectCluster:
 
 ```bash
-$ kubectl get all,secret -n demo -l 'app.kubernetes.io/instance=connectcluster-distributed'
+$ kubectl get all,petset,secret -n demo -l 'app.kubernetes.io/instance=connectcluster-distributed'
 NAME                               READY   STATUS    RESTARTS   AGE
 pod/connectcluster-distributed-0   1/1     Running   0          8m55s
 pod/connectcluster-distributed-1   1/1     Running   0          8m52s
@@ -191,8 +191,8 @@ NAME                                      TYPE        CLUSTER-IP     EXTERNAL-IP
 service/connectcluster-distributed        ClusterIP   10.128.238.9   <none>        8083/TCP   17m
 service/connectcluster-distributed-pods   ClusterIP   None           <none>        8083/TCP   17m
 
-NAME                                                         READY   AGE
-petset.apps/connectcluster-distributed                       2/2     8m56s
+NAME                                                         READY   AGE
+petset.apps.k8s.appscode.com/connectcluster-distributed      2/2     8m56s
 
 NAME                                                            TYPE                              VERSION   AGE
 appbinding.appcatalog.appscode.com/connectcluster-distributed   kafka.kubedb.com/connectcluster   3.6.1     8m56s
@@ -502,8 +502,8 @@ If you are just testing some basic functionalities, you might want to avoid addi
 
 ## Next Steps
 
-- [Quickstart Kafka](/docs/guides/kafka/quickstart/overview/kafka/index.md) with KubeDB Operator.
-- [Quickstart ConnectCluster](/docs/guides/kafka/quickstart/overview/connectcluster/index.md) with KubeDB Operator.
+- [Quickstart Kafka](/docs/guides/kafka/quickstart/kafka/index.md) with KubeDB Operator.
+- [Quickstart ConnectCluster](/docs/guides/kafka/connectcluster/overview.md) with KubeDB Operator.
 - Use [kubedb cli](/docs/guides/kafka/cli/cli.md) to manage databases like kubectl for Kubernetes.
 - Detail concepts of [ConnectCluster object](/docs/guides/kafka/concepts/connectcluster.md).
 - Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/quickstart/overview/connectcluster/index.md b/docs/guides/kafka/connectcluster/overview.md similarity index 94% rename from docs/guides/kafka/quickstart/overview/connectcluster/index.md rename to docs/guides/kafka/connectcluster/overview.md index 30c3622da6..5cec9c8e62 100644 --- a/docs/guides/kafka/quickstart/overview/connectcluster/index.md +++ b/docs/guides/kafka/connectcluster/overview.md @@ -2,10 +2,10 @@ title: ConnectCluster Quickstart menu: docs_{{ .version }}: - identifier: kf-kafka-overview-connectcluster - name: ConnectCluster - parent: kf-overview-kafka - weight: 15 + identifier: kf-connectcluster-guides-quickstart + name: Overview + parent: kf-connectcluster-guides + weight: 5 menu_name: docs_{{ .version }} section_menu_id: guides --- @@ -37,9 +37,9 @@ NAME STATUS AGE demo Active 9s ``` -> Note: YAML files used in this tutorial are stored in [guides/kafka/quickstart/overview/connectcluster/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/kafka/quickstart/overview/connectcluster/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). +> Note: YAML files used in this tutorial are stored in [guides/kafka/quickstart/connectcluster/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/kafka/quickstart/connectcluster/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). -> We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Kafka Connect Cluster. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/quickstart/overview/connectcluster/index.md#tips-for-testing). +> We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Kafka Connect Cluster. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/connectcluster/overview.md#tips-for-testing). ## Find Available ConnectCluster Versions @@ -128,7 +128,7 @@ Here, - `spec.deletionPolicy` specifies what KubeDB should do when a user try to delete ConnectCluster CR. Deletion policy `WipeOut` will delete the worker pods, secret when the ConnectCluster CR is deleted. ## N.B: -1. If replicas are set to 1, the ConnectCluster will run in standalone mode, you can't scale replica after provision the cluster. +1. If replicas are set to 1, the ConnectCluster will run in standalone mode, you can't scale replica after provision the cluster. 2. If replicas are set to more than 1, the ConnectCluster will run in distributed mode. 3. If you want to run the ConnectCluster in distributed mode with 1 replica, you must set the `CONNECT_CLUSTER_MODE` environment variable to `distributed` in the pod template. ```yaml @@ -142,11 +142,11 @@ spec: value: distributed ``` -Before create ConnectCluster, you have to deploy a `Kafka` cluster first. To deploy kafka cluster, follow the [Kafka Quickstart](/docs/guides/kafka/quickstart/overview/kafka/index.md) guide. Let's assume `kafka-quickstart` is already deployed using KubeDB. +Before create ConnectCluster, you have to deploy a `Kafka` cluster first. To deploy kafka cluster, follow the [Kafka Quickstart](/docs/guides/kafka/quickstart/kafka/index.md) guide. Let's assume `kafka-quickstart` is already deployed using KubeDB. 
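+
+Before moving on, you can optionally confirm that the Kafka cluster is `Ready` (a quick sanity check; the VERSION and AGE values below are illustrative):
+
+```bash
+$ kubectl get kf -n demo kafka-quickstart
+NAME               TYPE            VERSION   STATUS   AGE
+kafka-quickstart   kubedb.com/v1   3.6.1     Ready    5m
+```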
Let's create the ConnectCluster CR that is shown above: ```bash -$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/kafka/quickstart/overview/connectcluster/yamls/connectcluster.yaml +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/kafka/quickstart/connectcluster/yamls/connectcluster-quickstart.yaml connectcluster.kafka.kubedb.com/connectcluster-quickstart created ``` @@ -336,7 +336,7 @@ Events: On deployment of a ConnectCluster CR, the operator creates the following resources: ```bash -$ kubectl get all,secret -n demo -l 'app.kubernetes.io/instance=connectcluster-quickstart' +$ kubectl get all,petset,secret -n demo -l 'app.kubernetes.io/instance=connectcluster-quickstart' NAME READY STATUS RESTARTS AGE pod/connectcluster-quickstart-0 1/1 Running 0 3m50s pod/connectcluster-quickstart-1 1/1 Running 0 3m7s @@ -346,8 +346,8 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP service/connectcluster-quickstart ClusterIP 10.128.221.44 8083/TCP 3m55s service/connectcluster-quickstart-pods ClusterIP None 8083/TCP 3m55s -NAME READY AGE -petset.apps/connectcluster-quickstart 3/3 3m50s +NAME READY AGE +petset.apps.k8s.appscode.com/connectcluster-quickstart 3/3 3m50s NAME TYPE VERSION AGE appbinding.appcatalog.appscode.com/connectcluster-quickstart kafka.kubedb.com/connectcluster 3.6.1 3m50s @@ -392,7 +392,7 @@ $ cat config.properties value.converter.schemas.enable=false ``` -Here, +Here, 1. A MongoDB instance is already running. You can use your own MongoDB instance. To run mongodb instance, follow the [MongoDB Quickstart](/docs/guides/mongodb/quickstart/quickstart.md) guide. 2. Update `connection.uri` with your MongoDB URI. Example: `mongodb://::/`. 3. Update `database` and `collection` as per your MongoDB database and collection name. We are using `mongodb` and `source` as database and collection name respectively. @@ -430,7 +430,7 @@ Here, Now, create the `Connector` CR that is shown above: ```bash -$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/kafka/quickstart/overview/connectcluster/yamls/mongodb-source-connector.yaml +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/kafka/quickstart/connectcluster/yamls/mongodb-source-connector.yaml connector.kafka.kubedb.com/mongodb-source-connector created ``` @@ -514,8 +514,8 @@ If you are just testing some basic functionalities, you might want to avoid addi ## Next Steps -- [Quickstart Kafka](/docs/guides/kafka/quickstart/overview/kafka/index.md) with KubeDB Operator. -- [Quickstart ConnectCluster](/docs/guides/kafka/quickstart/overview/connectcluster/index.md) with KubeDB Operator. +- [Quickstart Kafka](/docs/guides/kafka/quickstart/kafka/index.md) with KubeDB Operator. +- [Quickstart ConnectCluster](/docs/guides/kafka/connectcluster/overview.md) with KubeDB Operator. - Use [kubedb cli](/docs/guides/kafka/cli/cli.md) to manage databases like kubectl for Kubernetes. - Detail concepts of [ConnectCluster object](/docs/guides/kafka/concepts/connectcluster.md). - Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). 
diff --git a/docs/guides/kafka/monitoring/using-builtin-prometheus.md b/docs/guides/kafka/monitoring/using-builtin-prometheus.md new file mode 100644 index 0000000000..6d29116adc --- /dev/null +++ b/docs/guides/kafka/monitoring/using-builtin-prometheus.md @@ -0,0 +1,371 @@ +--- +title: Monitor Kafka using Builtin Prometheus Discovery +menu: + docs_{{ .version }}: + identifier: kf-using-builtin-prometheus-monitoring + name: Builtin Prometheus + parent: kf-monitoring-kafka + weight: 20 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Monitoring Kafka with builtin Prometheus + +This tutorial will show you how to monitor Kafka cluster using builtin [Prometheus](https://github.com/prometheus/prometheus) scraper. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install KubeDB operator in your cluster following the steps [here](/docs/setup/README.md). + +- If you are not familiar with how to configure Prometheus to scrape metrics from various Kubernetes resources, please read the tutorial from [here](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin). + +- To learn how Prometheus monitoring works with KubeDB in general, please visit [here](/docs/guides/kafka/monitoring/overview.md). + +- To keep Prometheus resources isolated, we are going to use a separate namespace called `monitoring` to deploy respective monitoring resources. We are going to deploy database in `demo` namespace. + + ```bash + $ kubectl create ns monitoring + namespace/monitoring created + + $ kubectl create ns demo + namespace/demo created + ``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/kafka](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Deploy Kafka with Monitoring Enabled + +At first, let's deploy a Kafka cluster with monitoring enabled. Below is the Kafka object that we are going to create. + +```yaml +apiVersion: kubedb.com/v1alpha2 +kind: Kafka +metadata: + name: kafka-builtin-prom + namespace: demo +spec: + replicas: 2 + version: 3.6.1 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + monitor: + agent: prometheus.io/builtin + prometheus: + exporter: + port: 56790 + serviceMonitor: + labels: + release: prometheus + interval: 10s + deletionPolicy: WipeOut +``` + +Here, + +- `spec.monitor.agent: prometheus.io/builtin` specifies that we are going to monitor this server using builtin Prometheus scraper. +- `spec.monitor.prometheus.exporter.port: 56790` specifies the port where the exporter is running. + +Let's create the Kafka crd we have shown above. + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/monitoring/kafka-builtin-prom.yaml +kafka.kubedb.com/kafka-builtin-prom created +``` + +Now, wait for the cluster to go into `Ready` state. + +```bash +NAME TYPE VERSION STATUS AGE +kafka-builtin-prom kubedb.com/v1 3.6.1 Ready 31s +``` + +KubeDB will create a separate stats service with name `{Kafka crd name}-stats` for monitoring purpose. 
+
+```bash
+$ kubectl get svc -n demo --selector="app.kubernetes.io/instance=kafka-builtin-prom"
+NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                       AGE
+kafka-builtin-prom-pods    ClusterIP   None           <none>        9092/TCP,9093/TCP,29092/TCP   52s
+kafka-builtin-prom-stats   ClusterIP   10.96.222.96   <none>        56790/TCP                     52s
+```
+
+Here, the `kafka-builtin-prom-stats` service has been created for monitoring purposes. Let's describe the service.
+
+```bash
+$ kubectl describe svc -n demo kafka-builtin-prom-stats
+Name:              kafka-builtin-prom-stats
+Namespace:         demo
+Labels:            app.kubernetes.io/component=database
+                   app.kubernetes.io/instance=kafka-builtin-prom
+                   app.kubernetes.io/managed-by=kubedb.com
+                   app.kubernetes.io/name=kafkas.kubedb.com
+                   kubedb.com/role=stats
+Annotations:       monitoring.appscode.com/agent: prometheus.io/builtin
+                   prometheus.io/path: /metrics
+                   prometheus.io/port: 56790
+                   prometheus.io/scrape: true
+Selector:          app.kubernetes.io/instance=kafka-builtin-prom,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com
+Type:              ClusterIP
+IP Family Policy:  SingleStack
+IP Families:       IPv4
+IP:                10.96.222.96
+IPs:               10.96.222.96
+Port:              metrics  56790/TCP
+TargetPort:        metrics/TCP
+Endpoints:         10.244.0.31:56790,10.244.0.33:56790
+Session Affinity:  None
+Events:            <none>
+```
+
+You can see that the service contains the following annotations.
+
+```bash
+prometheus.io/path: /metrics
+prometheus.io/port: 56790
+prometheus.io/scrape: true
+```
+
+The Prometheus server will discover the service endpoint using these specifications and will scrape metrics from the exporter.
+
+## Configure Prometheus Server
+
+Now, we have to configure a Prometheus scraping job to scrape the metrics using this service. We are going to configure a scraping job similar to this [kubernetes-service-endpoints](https://github.com/appscode/third-party-tools/tree/master/monitoring/prometheus/builtin#kubernetes-service-endpoints) job that scrapes metrics from the endpoints of a service.
+
+Let's configure a Prometheus scraping job to collect metrics from this service.
+
+```yaml
+- job_name: 'kubedb-databases'
+  honor_labels: true
+  scheme: http
+  kubernetes_sd_configs:
+  - role: endpoints
+  # by default, the Prometheus server selects all Kubernetes services as possible targets.
+  # relabel_config is used to filter only desired endpoints
+  relabel_configs:
+  # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+    separator: ;
+    regex: true;(.*)
+    action: keep
+  # currently, KubeDB supported databases use only the "http" scheme to export metrics. so, drop any service that uses the "https" scheme.
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+    action: drop
+    regex: https
+  # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+  - source_labels: [__meta_kubernetes_service_name]
+    separator: ;
+    regex: (.*-stats)
+    action: keep
+  # services created by KubeDB will have the "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+    separator: ;
+    regex: (.*)
+    action: keep
+  # read the metric path from the "prometheus.io/path: <path>" annotation
+  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+    action: replace
+    target_label: __metrics_path__
+    regex: (.+)
+  # read the port from the "prometheus.io/port: <port>" annotation and update the scraping address accordingly
+  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+    action: replace
+    target_label: __address__
+    regex: ([^:]+)(?::\d+)?;(\d+)
+    replacement: $1:$2
+  # add service namespace as label to the scraped metrics
+  - source_labels: [__meta_kubernetes_namespace]
+    separator: ;
+    regex: (.*)
+    target_label: namespace
+    replacement: $1
+    action: replace
+  # add service name as a label to the scraped metrics
+  - source_labels: [__meta_kubernetes_service_name]
+    separator: ;
+    regex: (.*)
+    target_label: service
+    replacement: $1
+    action: replace
+  # add stats service's labels to the scraped metrics
+  - action: labelmap
+    regex: __meta_kubernetes_service_label_(.+)
+```
+
+### Configure Existing Prometheus Server
+
+If you already have a Prometheus server running, you have to add the above scraping job to the `ConfigMap` used to configure the Prometheus server. Then, you have to restart it for the updated configuration to take effect.
+
+>If you don't use a persistent volume for Prometheus storage, you will lose your previously scraped data on restart.
+
+### Deploy New Prometheus Server
+
+If you don't have any existing Prometheus server running, you have to deploy one. In this section, we are going to deploy a Prometheus server in the `monitoring` namespace to collect metrics using this stats service.
+
+**Create ConfigMap:**
+
+At first, create a ConfigMap with the scraping configuration. Below is the YAML of the ConfigMap that we are going to create in this tutorial.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: prometheus-config
+  labels:
+    app: prometheus-demo
+  namespace: monitoring
+data:
+  prometheus.yml: |-
+    global:
+      scrape_interval: 5s
+      evaluation_interval: 5s
+    scrape_configs:
+    - job_name: 'kubedb-databases'
+      honor_labels: true
+      scheme: http
+      kubernetes_sd_configs:
+      - role: endpoints
+      # by default, the Prometheus server selects all Kubernetes services as possible targets.
+      # relabel_config is used to filter only desired endpoints
+      relabel_configs:
+      # keep only those services that have the "prometheus.io/scrape", "prometheus.io/path" and "prometheus.io/port" annotations
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape, __meta_kubernetes_service_annotation_prometheus_io_port]
+        separator: ;
+        regex: true;(.*)
+        action: keep
+      # currently, KubeDB supported databases use only the "http" scheme to export metrics. so, drop any service that uses the "https" scheme.
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
+        action: drop
+        regex: https
+      # only keep the stats services created by KubeDB for monitoring purposes, which have the "-stats" suffix
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*-stats)
+        action: keep
+      # services created by KubeDB will have the "app.kubernetes.io/name" and "app.kubernetes.io/instance" labels. keep only those services that have these labels.
+      - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
+        separator: ;
+        regex: (.*)
+        action: keep
+      # read the metric path from the "prometheus.io/path: <path>" annotation
+      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
+        action: replace
+        target_label: __metrics_path__
+        regex: (.+)
+      # read the port from the "prometheus.io/port: <port>" annotation and update the scraping address accordingly
+      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
+        action: replace
+        target_label: __address__
+        regex: ([^:]+)(?::\d+)?;(\d+)
+        replacement: $1:$2
+      # add service namespace as label to the scraped metrics
+      - source_labels: [__meta_kubernetes_namespace]
+        separator: ;
+        regex: (.*)
+        target_label: namespace
+        replacement: $1
+        action: replace
+      # add service name as a label to the scraped metrics
+      - source_labels: [__meta_kubernetes_service_name]
+        separator: ;
+        regex: (.*)
+        target_label: service
+        replacement: $1
+        action: replace
+      # add stats service's labels to the scraped metrics
+      - action: labelmap
+        regex: __meta_kubernetes_service_label_(.+)
+```
+
+Let's create the above `ConfigMap`,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/monitoring/builtin-prometheus/prom-config.yaml
+configmap/prometheus-config created
+```
+
+**Create RBAC:**
+
+If you are using an RBAC enabled cluster, you have to give the necessary RBAC permissions to Prometheus. Let's create the necessary RBAC resources for Prometheus,
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/rbac.yaml
+clusterrole.rbac.authorization.k8s.io/prometheus created
+serviceaccount/prometheus created
+clusterrolebinding.rbac.authorization.k8s.io/prometheus created
+```
+
+>YAML for the RBAC resources created above can be found [here](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/rbac.yaml).
+
+**Deploy Prometheus:**
+
+Now, we are ready to deploy the Prometheus server. We are going to use the following [deployment](https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/artifacts/deployment.yaml) to deploy the Prometheus server.
+
+Let's deploy the Prometheus server.
+
+```bash
+$ kubectl apply -f https://github.com/appscode/third-party-tools/raw/master/monitoring/prometheus/builtin/artifacts/deployment.yaml
+deployment.apps/prometheus created
+```
+
+### Verify Monitoring Metrics
+
+The Prometheus server is listening on port `9090`. We are going to use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard.
+
+At first, let's check if the Prometheus pod is in `Running` state.
+
+```bash
+$ kubectl get pod -n monitoring -l=app=prometheus
+NAME                          READY   STATUS    RESTARTS   AGE
+prometheus-7bd56c6865-8dlpv   1/1     Running   0          28s
+```
+
+Now, run the following command in a separate terminal to forward port 9090 of the `prometheus-7bd56c6865-8dlpv` pod,
+
+```bash
+$ kubectl port-forward -n monitoring prometheus-7bd56c6865-8dlpv 9090
+Forwarding from 127.0.0.1:9090 -> 9090
+Forwarding from [::1]:9090 -> 9090
+```
+
+Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the endpoint of the `kafka-builtin-prom-stats` service as one of the targets.
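+
+If the target does not appear, you can also check the exporter endpoint directly from the command line. This is an optional sketch that assumes the stats service name and port shown earlier; run the port-forward in one terminal and the curl in another:
+
+```bash
+# forward the stats service port to localhost
+$ kubectl port-forward -n demo svc/kafka-builtin-prom-stats 56790
+
+# in a second terminal, fetch a few raw metrics from the exporter
+$ curl -s http://localhost:56790/metrics | head
+```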

+  [Image: Prometheus targets page (docs/images/kafka/monitoring/kafka-builtin-prom-target.png) showing the kafka-builtin-prom-stats endpoint]

+ +Check the labels marked with red rectangle. These labels confirm that the metrics are coming from `Kafka` cluster `kafka-builtin-prom` through stats service `kafka-builtin-prom-stats`. + +Now, you can view the collected metrics and create a graph from homepage of this Prometheus dashboard. You can also use this Prometheus server as data source for [Grafana](https://grafana.com/) and create beautiful dashboard with collected metrics. + +## Cleaning up + +To cleanup the Kubernetes resources created by this tutorial, run following commands + +```bash +kubectl delete -n demo kafka/kafka-builtin-prom + +kubectl delete -n monitoring deployment.apps/prometheus + +kubectl delete -n monitoring clusterrole.rbac.authorization.k8s.io/prometheus +kubectl delete -n monitoring serviceaccount/prometheus +kubectl delete -n monitoring clusterrolebinding.rbac.authorization.k8s.io/prometheus + +kubectl delete ns demo +kubectl delete ns monitoring +``` + +## Next Steps + +- Learn how to configure [Kafka Topology](/docs/guides/kafka/clustering/topology-cluster/index.md). +- Monitor your Kafka database with KubeDB using [`out-of-the-box` Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/kafka/quickstart/overview/kafka/index.md b/docs/guides/kafka/quickstart/kafka/index.md similarity index 96% rename from docs/guides/kafka/quickstart/overview/kafka/index.md rename to docs/guides/kafka/quickstart/kafka/index.md index d4e983a110..48cb813564 100644 --- a/docs/guides/kafka/quickstart/overview/kafka/index.md +++ b/docs/guides/kafka/quickstart/kafka/index.md @@ -2,9 +2,9 @@ title: Kafka Quickstart menu: docs_{{ .version }}: - identifier: kf-kafka-overview-kafka + identifier: kf-kafka-quickstart-kafka name: Kafka - parent: kf-overview-kafka + parent: kf-quickstart-kafka weight: 10 menu_name: docs_{{ .version }} section_menu_id: guides @@ -37,9 +37,9 @@ NAME STATUS AGE demo Active 9s ``` -> Note: YAML files used in this tutorial are stored in [guides/kafka/quickstart/overview/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/kafka/quickstart/overview/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). +> Note: YAML files used in this tutorial are stored in [guides/kafka/quickstart/kafka/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/kafka/quickstart/kafka/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). -> We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Kafka. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/quickstart/overview/kafka/index.md#tips-for-testing). +> We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Kafka. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/quickstart/kafka/index.md#tips-for-testing). 
## Find Available StorageClass @@ -102,7 +102,7 @@ spec: ``` ```bash -$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/Kafka/quickstart/overview/kafka/yamls/kafka-v1.yaml +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/Kafka/quickstart/kafka/yamls/kafka-v1.yaml kafka.kubedb.com/kafka-quickstart created ``` @@ -127,7 +127,7 @@ spec: ``` ```bash -$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/Kafka/quickstart/overview/kafka/yamls/kafka-v1alpha2.yaml +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/Kafka/quickstart/kafka/yamls/kafka-v1alpha2.yaml kafka.kubedb.com/kafka-quickstart created ``` @@ -434,8 +434,8 @@ If you are just testing some basic functionalities, you might want to avoid addi ## Next Steps -- [Quickstart Kafka](/docs/guides/kafka/quickstart/overview/kafka/index.md) with KubeDB Operator. -- [Quickstart ConnectCluster](/docs/guides/kafka/quickstart/overview/connectcluster/index.md) with KubeDB Operator. +- [Quickstart Kafka](/docs/guides/kafka/quickstart/kafka/index.md) with KubeDB Operator. +- [Quickstart ConnectCluster](/docs/guides/kafka/connectcluster/overview.md) with KubeDB Operator. - Kafka Clustering supported by KubeDB - [Combined Clustering](/docs/guides/kafka/clustering/combined-cluster/index.md) - [Topology Clustering](/docs/guides/kafka/clustering/topology-cluster/index.md) diff --git a/docs/guides/kafka/quickstart/overview/kafka/yamls/kafka-v1.yaml b/docs/guides/kafka/quickstart/kafka/yamls/kafka-v1.yaml similarity index 100% rename from docs/guides/kafka/quickstart/overview/kafka/yamls/kafka-v1.yaml rename to docs/guides/kafka/quickstart/kafka/yamls/kafka-v1.yaml diff --git a/docs/guides/kafka/quickstart/overview/kafka/yamls/kafka-v1alpha2.yaml b/docs/guides/kafka/quickstart/kafka/yamls/kafka-v1alpha2.yaml similarity index 100% rename from docs/guides/kafka/quickstart/overview/kafka/yamls/kafka-v1alpha2.yaml rename to docs/guides/kafka/quickstart/kafka/yamls/kafka-v1alpha2.yaml diff --git a/docs/guides/kafka/quickstart/overview/_index.md b/docs/guides/kafka/quickstart/overview/_index.md deleted file mode 100644 index 1991f1aef8..0000000000 --- a/docs/guides/kafka/quickstart/overview/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Kafka Overview -menu: - docs_{{ .version }}: - identifier: kf-overview-kafka - name: Overview - parent: kf-quickstart-kafka - weight: 10 -menu_name: docs_{{ .version }} ---- diff --git a/docs/guides/kafka/reconfigure-tls/_index.md b/docs/guides/kafka/reconfigure-tls/_index.md new file mode 100644 index 0000000000..5b2552a6df --- /dev/null +++ b/docs/guides/kafka/reconfigure-tls/_index.md @@ -0,0 +1,10 @@ +--- +title: Reconfigure TLS/SSL +menu: + docs_{{ .version }}: + identifier: kf-reconfigure-tls + name: Reconfigure TLS/SSL + parent: kf-kafka-guides + weight: 46 +menu_name: docs_{{ .version }} +--- diff --git a/docs/guides/kafka/reconfigure-tls/kafka.md b/docs/guides/kafka/reconfigure-tls/kafka.md new file mode 100644 index 0000000000..10a33bd741 --- /dev/null +++ b/docs/guides/kafka/reconfigure-tls/kafka.md @@ -0,0 +1,1088 @@ +--- +title: Reconfigure Kafka TLS/SSL Encryption +menu: + docs_{{ .version }}: + identifier: kf-reconfigure-tls-kafka + name: Reconfigure Kafka TLS/SSL Encryption + parent: kf-reconfigure-tls + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). 
+
+# Reconfigure Kafka TLS/SSL (Transport Encryption)
+
+KubeDB supports reconfiguring TLS/SSL, i.e., adding, removing, updating, and rotating the TLS/SSL certificates of an existing Kafka cluster via a `KafkaOpsRequest`. This tutorial will show you how to use KubeDB to reconfigure TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/kafka](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Add TLS to a Kafka database
+
+Here, we are going to create a Kafka cluster without TLS and then reconfigure it to use TLS.
+
+### Deploy Kafka without TLS
+
+In this section, we are going to deploy a Kafka topology cluster without TLS. In the next few sections, we will reconfigure TLS using a `KafkaOpsRequest` CRD. Below is the YAML of the `Kafka` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1
+kind: Kafka
+metadata:
+  name: kafka-prod
+  namespace: demo
+spec:
+  version: 3.6.1
+  topology:
+    broker:
+      replicas: 2
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    controller:
+      replicas: 2
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+  storageType: Durable
+  deletionPolicy: WipeOut
+```
+
+Let's create the `Kafka` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure-tls/kafka.yaml
+kafka.kubedb.com/kafka-prod created
+```
+
+Now, wait until `kafka-prod` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get kf -n demo -w
+NAME         TYPE            VERSION   STATUS         AGE
+kafka-prod   kubedb.com/v1   3.6.1     Provisioning   0s
+kafka-prod   kubedb.com/v1   3.6.1     Provisioning   9s
+.
+.
+kafka-prod   kubedb.com/v1   3.6.1     Ready          2m10s
+```
+
+Now, we can exec into one of the Kafka broker pods and verify from the configuration that TLS is disabled.
+
+```bash
+$ kubectl exec -it -n demo kafka-prod-broker-0 -- kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep 'ssl.keystore'
+  ssl.keystore.certificate.chain=null sensitive=true synonyms={}
+  ssl.keystore.key=null sensitive=true synonyms={}
+  ssl.keystore.location=null sensitive=false synonyms={}
+  ssl.keystore.password=null sensitive=true synonyms={}
+  ssl.keystore.type=JKS sensitive=false synonyms={DEFAULT_CONFIG:ssl.keystore.type=JKS}
+  ssl.keystore.certificate.chain=null sensitive=true synonyms={}
+  ssl.keystore.key=null sensitive=true synonyms={}
+  ssl.keystore.location=null sensitive=false synonyms={}
+  ssl.keystore.password=null sensitive=true synonyms={}
+  ssl.keystore.type=JKS sensitive=false synonyms={DEFAULT_CONFIG:ssl.keystore.type=JKS}
+```
+
+We can verify from the above output that TLS is disabled for this cluster.
+
+### Create Issuer/ClusterIssuer
+
+Now, we are going to create an example `Issuer` that will be used to enable SSL/TLS in Kafka. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
+
+- Start off by generating a CA certificate and key using openssl.
+
+```bash
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca/O=kubedb"
+Generating a RSA private key
+................+++++
+........................+++++
+writing new private key to './ca.key'
+-----
+```
+
+- Now we are going to create a CA secret using the certificate files that we have just generated.
+
+```bash
+$ kubectl create secret tls kafka-ca \
+     --cert=ca.crt \
+     --key=ca.key \
+     --namespace=demo
+secret/kafka-ca created
+```
+
+Now, let's create an `Issuer` using the `kafka-ca` secret that we have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: kf-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: kafka-ca
+```
+
+Let's apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure-tls/kafka-issuer.yaml
+issuer.cert-manager.io/kf-issuer created
+```
+
+### Create KafkaOpsRequest
+
+In order to add TLS to the Kafka cluster, we have to create a `KafkaOpsRequest` CRO referencing our newly created issuer. Below is the YAML of the `KafkaOpsRequest` CRO that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: KafkaOpsRequest
+metadata:
+  name: kfops-add-tls
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: kafka-prod
+  tls:
+    issuerRef:
+      name: kf-issuer
+      kind: Issuer
+      apiGroup: "cert-manager.io"
+    certificates:
+    - alias: client
+      subject:
+        organizations:
+        - kafka
+        organizationalUnits:
+        - client
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on the `kafka-prod` cluster.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on Kafka.
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/guides/kafka/concepts/kafka.md#spectls).
+
+Let's create the `KafkaOpsRequest` CR we have shown above,
+
+> **Note:** For a combined Kafka cluster, you just need to refer to the combined Kafka object in the `databaseRef` field.
To learn more about combined kafka, please visit [here](/docs/guides/kafka/clustering/combined-cluster/index.md). + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure-tls/kafka-add-tls.yaml +kafkaopsrequest.ops.kubedb.com/kfops-add-tls created +``` + +#### Verify TLS Enabled Successfully + +Let's wait for `KafkaOpsRequest` to be `Successful`. Run the following command to watch `KafkaOpsRequest` CRO, + +```bash +$ kubectl get kafkaopsrequest -n demo +NAME TYPE STATUS AGE +kfops-add-tls ReconfigureTLS Successful 4m36s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed. + +```bash +$ kubectl describe kafkaopsrequest -n demo kfops-add-tls +Name: kfops-add-tls +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-07-31T06:36:27Z + Generation: 1 + Resource Version: 158448 + UID: 9c95ef81-2db8-4740-9708-60618ab57db5 +Spec: + Apply: IfReady + Database Ref: + Name: kafka-prod + Timeout: 5m + Tls: + Certificates: + Alias: client + Subject: + Organizational Units: + client + Organizations: + kafka + Issuer Ref: + API Group: cert-manager.io + Kind: Issuer + Name: kf-issuer + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2024-07-31T06:36:27Z + Message: Kafka ops-request has started to reconfigure tls for kafka nodes + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2024-07-31T06:36:36Z + Message: get certificate; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetCertificate + Last Transition Time: 2024-07-31T06:36:36Z + Message: check ready condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CheckReadyCondition + Last Transition Time: 2024-07-31T06:36:36Z + Message: issuing condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IssuingCondition + Last Transition Time: 2024-07-31T06:36:37Z + Message: Successfully synced all certificates + Observed Generation: 1 + Reason: CertificateSynced + Status: True + Type: CertificateSynced + Last Transition Time: 2024-07-31T06:38:45Z + Message: successfully reconciled the Kafka with tls configuration + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-07-31T06:38:50Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-0 + Last Transition Time: 2024-07-31T06:38:50Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-0 + Last Transition Time: 2024-07-31T06:39:06Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-0 + Last Transition Time: 2024-07-31T06:39:10Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-1 + Last Transition Time: 2024-07-31T06:39:10Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-1 + Last Transition Time: 
2024-07-31T06:39:25Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-1 + Last Transition Time: 2024-07-31T06:39:30Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-0 + Last Transition Time: 2024-07-31T06:39:35Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-0 + Last Transition Time: 2024-07-31T06:39:45Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-0 + Last Transition Time: 2024-07-31T06:39:50Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-1 + Last Transition Time: 2024-07-31T06:39:50Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-1 + Last Transition Time: 2024-07-31T06:40:05Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-1 + Last Transition Time: 2024-07-31T06:40:10Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-07-31T06:40:11Z + Message: Successfully completed reconfigureTLS for kafka. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 4m59s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-add-tls + Normal Starting 4m59s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-prod + Normal Successful 4m59s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-add-tls + Warning get certificate; ConditionStatus:True 4m51s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning check ready condition; ConditionStatus:True 4m50s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + Warning issuing condition; ConditionStatus:True 4m50s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True + Warning get certificate; ConditionStatus:True 4m49s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning check ready condition; ConditionStatus:True 4m49s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + Warning issuing condition; ConditionStatus:True 4m49s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True + Normal CertificateSynced 4m49s KubeDB Ops-manager Operator Successfully synced all certificates + Warning get certificate; ConditionStatus:True 4m44s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning check ready condition; ConditionStatus:True 4m44s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + Warning issuing condition; ConditionStatus:True 4m44s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True + Warning get certificate; ConditionStatus:True 4m43s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + 
Warning check ready condition; ConditionStatus:True 4m43s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + Warning issuing condition; ConditionStatus:True 4m43s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True + Normal CertificateSynced 4m43s KubeDB Ops-manager Operator Successfully synced all certificates + Normal UpdatePetSets 2m41s KubeDB Ops-manager Operator successfully reconciled the Kafka with tls configuration + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 2m36s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 2m36s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 2m31s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 2m21s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 2m16s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 2m16s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 2m11s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 2m1s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 116s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning evict pod; ConditionStatus:False; PodName:kafka-prod-broker-0 116s KubeDB Ops-manager Operator evict pod; ConditionStatus:False; PodName:kafka-prod-broker-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 111s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 111s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0 106s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 101s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 96s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 96s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 91s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 81s KubeDB Ops-manager Operator 
check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1
+  Normal   RestartNodes                                                               76s    KubeDB Ops-manager Operator  Successfully restarted all nodes
+  Normal   Starting                                                                   76s    KubeDB Ops-manager Operator  Resuming Kafka database: demo/kafka-prod
+  Normal   Successful                                                                 76s    KubeDB Ops-manager Operator  Successfully resumed Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-add-tls
+```
+
+Now, let's exec into a Kafka broker pod and verify from the configuration that TLS is enabled.
+
+```bash
+$ kubectl exec -it -n demo kafka-prod-broker-0 -- kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep 'ssl.keystore'
+  ssl.keystore.certificate.chain=null sensitive=true synonyms={}
+  ssl.keystore.key=null sensitive=true synonyms={}
+  ssl.keystore.location=/var/private/ssl/server.keystore.jks sensitive=false synonyms={STATIC_BROKER_CONFIG:ssl.keystore.location=/var/private/ssl/server.keystore.jks}
+  ssl.keystore.password=null sensitive=true synonyms={STATIC_BROKER_CONFIG:ssl.keystore.password=null}
+  ssl.keystore.type=JKS sensitive=false synonyms={DEFAULT_CONFIG:ssl.keystore.type=JKS}
+  ssl.keystore.certificate.chain=null sensitive=true synonyms={}
+  ssl.keystore.key=null sensitive=true synonyms={}
+  ssl.keystore.location=/var/private/ssl/server.keystore.jks sensitive=false synonyms={STATIC_BROKER_CONFIG:ssl.keystore.location=/var/private/ssl/server.keystore.jks}
+  ssl.keystore.password=null sensitive=true synonyms={STATIC_BROKER_CONFIG:ssl.keystore.password=null}
+  ssl.keystore.type=JKS sensitive=false synonyms={DEFAULT_CONFIG:ssl.keystore.type=JKS}
+```
+
+We can see from the above output that the keystore location is now `/var/private/ssl/server.keystore.jks`, which means that TLS is enabled.
+
+## Rotate Certificate
+
+Now we are going to rotate the certificate of this cluster. First, let's check the current expiration date of the certificate.
+
+```bash
+$ kubectl exec -it -n demo kafka-prod-broker-0 -- keytool -list -v -keystore /var/private/ssl/server.keystore.jks -storepass wt6f5pwxpg84 | grep -E 'Valid from|Alias name'
+Alias name: ca
+Valid from: Wed Jul 31 06:11:30 UTC 2024 until: Thu Jul 31 06:11:30 UTC 2025
+Alias name: certificate
+Valid from: Wed Jul 31 06:36:31 UTC 2024 until: Tue Oct 29 06:36:31 UTC 2024
+```
+
+So, the certificate will expire on `Tue Oct 29 06:36:31 UTC 2024`.
+
+### Create KafkaOpsRequest
+
+Now we are going to rotate this certificate using a `KafkaOpsRequest`. Below is the YAML of the ops request that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: KafkaOpsRequest
+metadata:
+  name: kfops-rotate
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: kafka-prod
+  tls:
+    rotateCertificates: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on `kafka-prod`.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our cluster.
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificates of this Kafka cluster.
+
+Let's create the `KafkaOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure-tls/kafka-rotate.yaml
+kafkaopsrequest.ops.kubedb.com/kfops-rotate created
+```
+
+#### Verify Certificate Rotated Successfully
+
+Let's wait for the `KafkaOpsRequest` to be `Successful`.
Run the following command to watch `KafkaOpsRequest` CRO, + +```bash +$ kubectl get kafkaopsrequests -n demo kfops-rotate +NAME TYPE STATUS AGE +kfops-rotate ReconfigureTLS Successful 4m4s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed. + +```bash +$ kubectl describe kafkaopsrequest -n demo kfops-rotate +Name: kfops-rotate +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-07-31T07:02:10Z + Generation: 1 + Resource Version: 161186 + UID: d1e6f412-3771-4963-8384-2c31bab3a057 +Spec: + Apply: IfReady + Database Ref: + Name: kafka-prod + Tls: + Rotate Certificates: true + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2024-07-31T07:02:10Z + Message: Kafka ops-request has started to reconfigure tls for kafka nodes + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2024-07-31T07:02:18Z + Message: get certificate; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetCertificate + Last Transition Time: 2024-07-31T07:02:18Z + Message: check ready condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CheckReadyCondition + Last Transition Time: 2024-07-31T07:02:18Z + Message: issuing condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IssuingCondition + Last Transition Time: 2024-07-31T07:02:18Z + Message: Successfully synced all certificates + Observed Generation: 1 + Reason: CertificateSynced + Status: True + Type: CertificateSynced + Last Transition Time: 2024-07-31T07:03:59Z + Message: successfully reconciled the Kafka with tls configuration + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-07-31T07:04:05Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-0 + Last Transition Time: 2024-07-31T07:04:05Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-0 + Last Transition Time: 2024-07-31T07:04:20Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-0 + Last Transition Time: 2024-07-31T07:04:25Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-1 + Last Transition Time: 2024-07-31T07:04:25Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-1 + Last Transition Time: 2024-07-31T07:04:40Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-1 + Last Transition Time: 2024-07-31T07:04:45Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-0 + Last Transition Time: 2024-07-31T07:04:45Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-0 + Last 
Transition Time: 2024-07-31T07:05:20Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-0 + Last Transition Time: 2024-07-31T07:05:25Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-1 + Last Transition Time: 2024-07-31T07:05:25Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-1 + Last Transition Time: 2024-07-31T07:05:35Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-1 + Last Transition Time: 2024-07-31T07:05:40Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-07-31T07:05:40Z + Message: Successfully completed reconfigureTLS for kafka. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 5m7s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-rotate + Normal Starting 5m7s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-prod + Normal Successful 5m7s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-rotate + Warning get certificate; ConditionStatus:True 4m59s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning check ready condition; ConditionStatus:True 4m59s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + Warning issuing condition; ConditionStatus:True 4m59s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True + Warning get certificate; ConditionStatus:True 4m59s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning check ready condition; ConditionStatus:True 4m59s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + Warning issuing condition; ConditionStatus:True 4m59s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True + Normal CertificateSynced 4m59s KubeDB Ops-manager Operator Successfully synced all certificates + Warning get certificate; ConditionStatus:True 4m53s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning check ready condition; ConditionStatus:True 4m53s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + Warning issuing condition; ConditionStatus:True 4m53s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True + Warning get certificate; ConditionStatus:True 4m53s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning check ready condition; ConditionStatus:True 4m53s KubeDB Ops-manager Operator check ready condition; ConditionStatus:True + Warning issuing condition; ConditionStatus:True 4m53s KubeDB Ops-manager Operator issuing condition; ConditionStatus:True + Normal CertificateSynced 4m53s KubeDB Ops-manager Operator Successfully synced all certificates + Normal UpdatePetSets 3m18s KubeDB Ops-manager Operator successfully reconciled the Kafka with tls configuration + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 3m12s KubeDB Ops-manager Operator get pod; 
ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 3m12s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 3m7s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 2m57s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 2m52s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 2m52s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 2m47s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 2m37s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 2m32s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 2m32s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0 2m27s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 117s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 112s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 112s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 107s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 102s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Normal RestartNodes 97s KubeDB Ops-manager Operator Successfully restarted all nodes + Normal Starting 97s KubeDB Ops-manager Operator Resuming Kafka database: demo/kafka-prod + Normal Successful 97s KubeDB Ops-manager Operator Successfully resumed Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-rotate +``` + +Now, let's check the expiration date of the certificate. 
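+One way to read the expiry without exec-ing into a pod is to decode the certificate straight from its cert-manager secret. This is a convenience sketch only; the secret name `kafka-prod-client-cert` is an assumption based on KubeDB's usual `<db-name>-client-cert` naming, so confirm the actual name with `kubectl get secrets -n demo` first.
+
+```bash
+# Decode the client certificate from its secret and print its validity window.
+# NOTE: the secret name below is an assumption -- verify it in your cluster first.
+$ kubectl get secret kafka-prod-client-cert -n demo -o jsonpath='{.data.tls\.crt}' \
+    | base64 -d | openssl x509 -noout -startdate -enddate
+```
+
+Inside a broker pod, the JKS keystore reports the same validity window: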
+ +```bash +$ kubectl exec -it -n demo kafka-prod-broker-0 -- keytool -list -v -keystore /var/private/ssl/server.keystore.jks -storepass wt6f5pwxpg84 | grep -E 'Valid from|Alias name' +Alias name: ca +Valid from: Wed Jul 31 06:11:30 UTC 2024 until: Thu Jul 31 06:11:30 UTC 2025 +Alias name: certificate +Valid from: Wed Jul 31 07:05:40 UTC 2024 until: Tue Oct 29 07:05:40 UTC 2024 +``` + +As we can see from the above output, the certificate has been rotated successfully. + +## Change Issuer/ClusterIssuer + +Now, we are going to change the issuer of this database. + +- Let's create a new CA certificate and key using a different subject `CN=ca-updated,O=kubedb-updated`. + +```bash +$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca-updated/O=kubedb-updated" +Generating a RSA private key +..............................................................+++++ +......................................................................................+++++ +writing new private key to './ca.key' +----- +``` + +- Now we are going to create a new CA secret using the certificate files that we have just generated. + +```bash +$ kubectl create secret tls kafka-new-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +secret/kafka-new-ca created +``` + +Now, let's create a new `Issuer` using the `kafka-new-ca` secret that we have just created. The `YAML` file looks like this: + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: kf-new-issuer + namespace: demo +spec: + ca: + secretName: kafka-new-ca +``` + +Let's apply the `YAML` file: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure-tls/kafka-new-issuer.yaml +issuer.cert-manager.io/kf-new-issuer created +``` + +### Create KafkaOpsRequest + +In order to use the new issuer to issue new certificates, we have to create a `KafkaOpsRequest` CRO with the newly created issuer. Below is the YAML of the `KafkaOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-update-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: kafka-prod + tls: + issuerRef: + name: kf-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing a reconfigure TLS operation on the `kafka-prod` cluster. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our Kafka cluster. +- `spec.tls.issuerRef` specifies the issuer name, kind, and API group. + +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure-tls/kafka-update-tls-issuer.yaml +kafkaopsrequest.ops.kubedb.com/kfops-update-issuer created +``` + +#### Verify Issuer is changed successfully + +Let's wait for `KafkaOpsRequest` to be `Successful`. Run the following command to watch `KafkaOpsRequest` CRO, + +```bash +$ kubectl get kafkaopsrequests -n demo kfops-update-issuer +NAME TYPE STATUS AGE +kfops-update-issuer ReconfigureTLS Successful 8m6s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed.
+ +```bash +$ kubectl describe kafkaopsrequest -n demo kfops-update-issuer +Name: kfops-update-issuer +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-07-31T07:33:37Z + Generation: 1 + Resource Version: 163574 + UID: d81c7a63-199b-4c45-b9c0-a4a93fed3c10 +Spec: + Apply: IfReady + Database Ref: + Name: kafka-prod + Tls: + Issuer Ref: + API Group: cert-manager.io + Kind: Issuer + Name: kf-new-issuer + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2024-07-31T07:33:37Z + Message: Kafka ops-request has started to reconfigure tls for kafka nodes + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2024-07-31T07:33:43Z + Message: get certificate; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetCertificate + Last Transition Time: 2024-07-31T07:33:43Z + Message: check ready condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CheckReadyCondition + Last Transition Time: 2024-07-31T07:33:44Z + Message: issuing condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IssuingCondition + Last Transition Time: 2024-07-31T07:33:44Z + Message: Successfully synced all certificates + Observed Generation: 1 + Reason: CertificateSynced + Status: True + Type: CertificateSynced + Last Transition Time: 2024-07-31T07:35:49Z + Message: successfully reconciled the Kafka with tls configuration + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-07-31T07:35:54Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-0 + Last Transition Time: 2024-07-31T07:35:54Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-0 + Last Transition Time: 2024-07-31T07:36:09Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-0 + Last Transition Time: 2024-07-31T07:36:14Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-1 + Last Transition Time: 2024-07-31T07:36:14Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-1 + Last Transition Time: 2024-07-31T07:36:34Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-1 + Last Transition Time: 2024-07-31T07:36:39Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-0 + Last Transition Time: 2024-07-31T07:36:39Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-0 + Last Transition Time: 2024-07-31T07:37:19Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-0 + Last Transition Time: 2024-07-31T07:37:24Z + Message: get pod; ConditionStatus:True; 
PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-1 + Last Transition Time: 2024-07-31T07:37:24Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-1 + Last Transition Time: 2024-07-31T07:38:04Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-1 + Last Transition Time: 2024-07-31T07:38:09Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-07-31T07:38:09Z + Message: Successfully completed reconfigureTLS for kafka. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: +``` + +Now, let's exec into a Kafka broker node and check whether the CA subject matches the one we provided. + +```bash +$ kubectl exec -it -n demo kafka-prod-broker-0 -- bash +kafka@kafka-prod-broker-0:~$ keytool -list -v -keystore /var/private/ssl/server.keystore.jks -storepass wt6f5pwxpg84 | grep 'Issuer' +Issuer: O=kubedb-updated, CN=ca-updated +Issuer: O=kubedb-updated, CN=ca-updated +``` + +We can see from the above output that the issuer subject matches the subject of the new CA certificate that we created. So, the issuer has been changed successfully. + +## Remove TLS from the Database + +Now, we are going to remove TLS from this database using a KafkaOpsRequest. + +### Create KafkaOpsRequest + +Below is the YAML of the `KafkaOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: kafka-prod + tls: + remove: true +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing a reconfigure TLS operation on the `kafka-prod` cluster. +- `spec.type` specifies that we are performing `ReconfigureTLS` on Kafka. +- `spec.tls.remove` specifies that we want to remove TLS from this cluster. + +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure-tls/kafka-remove-tls.yaml +kafkaopsrequest.ops.kubedb.com/kfops-remove created +``` + +#### Verify TLS Removed Successfully + +Let's wait for `KafkaOpsRequest` to be `Successful`. Run the following command to watch `KafkaOpsRequest` CRO, + +```bash +$ kubectl get kafkaopsrequest -n demo kfops-remove +NAME TYPE STATUS AGE +kfops-remove ReconfigureTLS Successful 105s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed.
+ +```bash +$ kubectl describe kafkaopsrequest -n demo kfops-remove +Name: kfops-remove +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-07-31T09:34:09Z + Generation: 1 + Resource Version: 171329 + UID: c21b5c15-8fc0-43b5-9b46-6d1a98c9422d +Spec: + Apply: IfReady + Database Ref: + Name: kafka-prod + Tls: + Remove: true + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2024-07-31T09:34:09Z + Message: Kafka ops-request has started to reconfigure tls for kafka nodes + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2024-07-31T09:34:17Z + Message: successfully reconciled the Kafka with tls configuration + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-07-31T09:34:22Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-0 + Last Transition Time: 2024-07-31T09:34:22Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-0 + Last Transition Time: 2024-07-31T09:34:32Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-0 + Last Transition Time: 2024-07-31T09:34:37Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-1 + Last Transition Time: 2024-07-31T09:34:37Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-1 + Last Transition Time: 2024-07-31T09:34:47Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-1 + Last Transition Time: 2024-07-31T09:34:52Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-0 + Last Transition Time: 2024-07-31T09:34:52Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-0 + Last Transition Time: 2024-07-31T09:35:32Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-0 + Last Transition Time: 2024-07-31T09:35:37Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-1 + Last Transition Time: 2024-07-31T09:35:37Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-1 + Last Transition Time: 2024-07-31T09:38:47Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-1 + Last Transition Time: 2024-07-31T09:38:52Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-07-31T09:38:52Z + Message: Successfully completed 
reconfigureTLS for kafka. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: +``` + +Now, let's exec into one of the broker nodes and verify that TLS has been disabled. + +```bash +$ kubectl exec -it -n demo kafka-prod-broker-0 -- kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep 'ssl.keystore' + ssl.keystore.certificate.chain=null sensitive=true synonyms={} + ssl.keystore.key=null sensitive=true synonyms={} + ssl.keystore.location=null sensitive=false synonyms={} + ssl.keystore.password=null sensitive=true synonyms={} + ssl.keystore.type=JKS sensitive=false synonyms={DEFAULT_CONFIG:ssl.keystore.type=JKS} + ssl.keystore.certificate.chain=null sensitive=true synonyms={} + ssl.keystore.key=null sensitive=true synonyms={} + ssl.keystore.location=null sensitive=false synonyms={} + ssl.keystore.password=null sensitive=true synonyms={} +``` + +As we can see from the above output, all the keystore values are `null`, so TLS has been disabled successfully. + +## Cleaning up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete kafkaopsrequest -n demo kfops-add-tls kfops-remove kfops-rotate kfops-update-issuer +kubectl delete kafka -n demo kafka-prod +kubectl delete issuer -n demo kf-issuer kf-new-issuer +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md). +- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md). +- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md). + +[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).) +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). + diff --git a/docs/guides/kafka/reconfigure-tls/overview.md b/docs/guides/kafka/reconfigure-tls/overview.md new file mode 100644 index 0000000000..f309b45d3a --- /dev/null +++ b/docs/guides/kafka/reconfigure-tls/overview.md @@ -0,0 +1,54 @@ +--- +title: Reconfiguring TLS/SSL +menu: + docs_{{ .version }}: + identifier: kf-reconfigure-tls-overview + name: Overview + parent: kf-reconfigure-tls + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Reconfiguring TLS of Kafka + +This guide gives an overview of how the KubeDB Ops-manager operator reconfigures the TLS configuration of `Kafka`, i.e. adds TLS, removes TLS, updates the issuer/cluster issuer or certificates, and rotates the certificates. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [Kafka](/docs/guides/kafka/concepts/kafka.md) + - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md) + +## How Reconfiguring Kafka TLS Configuration Process Works + +The following diagram shows how the KubeDB Ops-manager operator reconfigures TLS of a `Kafka`. Open the image in a new tab to see the enlarged version. + +<figure align="center">
+  <img alt="Reconfiguring TLS process of Kafka" src="/docs/images/day-2-operation/kafka/kf-reconfigure-tls.svg">
+<figcaption align="center">Fig: Reconfiguring TLS process of Kafka</figcaption>
+</figure>
+ +The Reconfiguring Kafka TLS process consists of the following steps: + +1. At first, a user creates a `Kafka` Custom Resource Object (CRO). + +2. `KubeDB` Provisioner operator watches the `Kafka` CRO. + +3. When the operator finds a `Kafka` CR, it creates the required number of `PetSets` along with related resources like secrets, services, etc. + +4. Then, in order to reconfigure the TLS configuration of the `Kafka` database, the user creates a `KafkaOpsRequest` CR with the desired information. + +5. `KubeDB` Ops-manager operator watches the `KafkaOpsRequest` CR. + +6. When it finds a `KafkaOpsRequest` CR, it pauses the `Kafka` object referenced by the `KafkaOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Kafka` object during the reconfiguring TLS process. + +7. Then the `KubeDB` Ops-manager operator will add, remove, update, or rotate the TLS configuration based on the `KafkaOpsRequest` YAML. + +8. Then the `KubeDB` Ops-manager operator will restart all the Pods of the database so that they come back up with the new TLS configuration defined in the `KafkaOpsRequest` CR. + +9. After the successful reconfiguring of the `Kafka` TLS, the `KubeDB` Ops-manager operator resumes the `Kafka` object so that the `KubeDB` Provisioner operator resumes its usual operations. + +In the next docs, we are going to show a step-by-step guide on reconfiguring the TLS configuration of a Kafka database using `KafkaOpsRequest` CRD. \ No newline at end of file diff --git a/docs/guides/kafka/reconfigure/_index.md b/docs/guides/kafka/reconfigure/_index.md new file mode 100644 index 0000000000..86d8888b6b --- /dev/null +++ b/docs/guides/kafka/reconfigure/_index.md @@ -0,0 +1,10 @@ +--- +title: Reconfigure +menu: + docs_{{ .version }}: + identifier: kf-reconfigure + name: Reconfigure + parent: kf-kafka-guides + weight: 46 +menu_name: docs_{{ .version }} +--- \ No newline at end of file diff --git a/docs/guides/kafka/reconfigure/kafka-combined.md b/docs/guides/kafka/reconfigure/kafka-combined.md new file mode 100644 index 0000000000..d209dea624 --- /dev/null +++ b/docs/guides/kafka/reconfigure/kafka-combined.md @@ -0,0 +1,506 @@ +--- +title: Reconfigure Kafka Combined +menu: + docs_{{ .version }}: + identifier: kf-reconfigure-combined + name: Combined + parent: kf-reconfigure + weight: 30 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Reconfigure Kafka Combined Cluster + +This guide will show you how to use the `KubeDB` Ops-manager operator to reconfigure a Kafka Combined cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [Kafka](/docs/guides/kafka/concepts/kafka.md) + - [Combined](/docs/guides/kafka/clustering/combined-cluster/index.md) + - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md) + - [Reconfigure Overview](/docs/guides/kafka/reconfigure/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+ +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/kafka](/docs/examples/kafka) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +Now, we are going to deploy a `Kafka` Combined cluster using a version supported by the `KubeDB` operator. Then we are going to apply `KafkaOpsRequest` to reconfigure its configuration. + +### Prepare Kafka Combined Cluster + +Now, we are going to deploy a `Kafka` combined cluster with version `3.6.1`. + +### Deploy Kafka + +At first, we will create a secret with the `server.properties` file containing required configuration settings. + +**server.properties:** + +```properties +log.retention.hours=100 +``` +Here, `log.retention.hours` is set to `100`, whereas the default value is `168`. + +Let's create a k8s secret containing the above configuration where the file name will be the key and the file content will be the value: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: kf-combined-custom-config + namespace: demo +stringData: + server.properties: |- + log.retention.hours=100 +``` + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure/kafka-combined-custom-config.yaml +secret/kf-combined-custom-config created +``` + +In this section, we are going to create a Kafka object specifying the `spec.configSecret` field to apply this custom configuration. Below is the YAML of the `Kafka` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-dev + namespace: demo +spec: + replicas: 2 + version: 3.6.1 + configSecret: + name: kf-combined-custom-config + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut +``` + +Let's create the `Kafka` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure/kafka-combined.yaml +kafka.kubedb.com/kafka-dev created +``` + +Now, wait until `kafka-dev` has status `Ready`, i.e., + +```bash +$ kubectl get kf -n demo -w +NAME TYPE VERSION STATUS AGE +kafka-dev kubedb.com/v1 3.6.1 Provisioning 0s +kafka-dev kubedb.com/v1 3.6.1 Provisioning 24s +. +. +kafka-dev kubedb.com/v1 3.6.1 Ready 92s +``` + +Now, we will check if Kafka has started with the custom configuration we have provided. + +Exec into the Kafka pod and execute the following commands to see the configurations: +```bash +$ kubectl exec -it -n demo kafka-dev-0 -- bash +kafka@kafka-dev-0:~$ kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep log.retention.hours + log.retention.hours=100 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=100, DEFAULT_CONFIG:log.retention.hours=168} + log.retention.hours=100 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=100, DEFAULT_CONFIG:log.retention.hours=168} +``` +Here, we can see that our given configuration is applied to the Kafka cluster for all brokers. `log.retention.hours` is set to `100` from the default value `168`. + +### Reconfigure using new config secret + +Now we will reconfigure this cluster to set `log.retention.hours` to `125`. + +To do that, update the `server.properties` file with the new configuration.
+ +**server.properties:** + +```properties +log.retention.hours=125 +``` + +Then, we will create a new secret with this configuration file. + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: new-kf-combined-custom-config + namespace: demo +stringData: + server.properties: |- + log.retention.hours=125 +``` + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure/new-kafka-combined-custom-config.yaml +secret/new-kf-combined-custom-config created +``` + +#### Create KafkaOpsRequest + +Now, we will use this secret to replace the previous secret using a `KafkaOpsRequest` CR. The `KafkaOpsRequest` YAML is given below, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfigure-combined + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-dev + configuration: + configSecret: + name: new-kf-combined-custom-config + timeout: 5m + apply: IfReady +``` + +Here, + +- `spec.databaseRef.name` specifies that we are reconfiguring the `kafka-dev` database. +- `spec.type` specifies that we are performing `Reconfigure` on our database. +- `spec.configuration.configSecret.name` specifies the name of the new secret. + +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure/kafka-reconfigure-update-combined.yaml +kafkaopsrequest.ops.kubedb.com/kfops-reconfigure-combined created +``` + +#### Verify the new configuration is working + +If everything goes well, `KubeDB` Ops-manager operator will update the `configSecret` of the `Kafka` object (a quick way to verify this is sketched at the end of this guide). + +Let's wait for `KafkaOpsRequest` to be `Successful`. Run the following command to watch `KafkaOpsRequest` CR, + +```bash +$ kubectl get kafkaopsrequests -n demo +NAME TYPE STATUS AGE +kfops-reconfigure-combined Reconfigure Successful 4m55s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to reconfigure the database.
+ +```bash +$ kubectl describe kafkaopsrequest -n demo kfops-reconfigure-combined +Name: kfops-reconfigure-combined +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-01T09:14:46Z + Generation: 1 + Resource Version: 258361 + UID: ac2147ba-51cf-4ebf-8328-76253379108c +Spec: + Apply: IfReady + Configuration: + Config Secret: + Name: new-kf-combined-custom-config + Database Ref: + Name: kafka-dev + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2024-08-01T09:14:46Z + Message: Kafka ops-request has started to reconfigure kafka nodes + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2024-08-01T09:14:55Z + Message: successfully reconciled the Kafka with new configure + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-01T09:15:00Z + Message: get pod; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-dev-0 + Last Transition Time: 2024-08-01T09:15:00Z + Message: evict pod; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-dev-0 + Last Transition Time: 2024-08-01T09:16:15Z + Message: check pod running; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-dev-0 + Last Transition Time: 2024-08-01T09:16:20Z + Message: get pod; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-dev-1 + Last Transition Time: 2024-08-01T09:16:20Z + Message: evict pod; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-dev-1 + Last Transition Time: 2024-08-01T09:17:20Z + Message: check pod running; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-dev-1 + Last Transition Time: 2024-08-01T09:17:25Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-08-01T09:17:25Z + Message: Successfully completed reconfigure kafka + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 5m32s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-reconfigure-combined + Normal Starting 5m32s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-dev + Normal Successful 5m32s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-reconfigure-combined + Normal UpdatePetSets 5m23s KubeDB Ops-manager Operator successfully reconciled the Kafka with new configure + Warning get pod; ConditionStatus:True; PodName:kafka-dev-0 5m18s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-dev-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-dev-0 5m18s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-dev-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-dev-0 5m13s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-dev-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-dev-0 4m3s KubeDB Ops-manager Operator check pod 
running; ConditionStatus:True; PodName:kafka-dev-0 + Warning get pod; ConditionStatus:True; PodName:kafka-dev-1 3m58s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-dev-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-dev-1 3m58s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-dev-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-dev-1 3m53s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-dev-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-dev-1 2m58s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-dev-1 + Normal RestartNodes 2m53s KubeDB Ops-manager Operator Successfully restarted all nodes + Normal Starting 2m53s KubeDB Ops-manager Operator Resuming Kafka database: demo/kafka-dev + Normal Successful 2m53s KubeDB Ops-manager Operator Successfully resumed Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-reconfigure-combined +``` + +Now let's exec into one of the instances and run a `kafka-configs.sh` command to check the new configuration we have provided. + +```bash +$ kubectl exec -it -n demo kafka-dev-0 -- kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep 'log.retention.hours' + log.retention.hours=125 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=125, DEFAULT_CONFIG:log.retention.hours=168} + log.retention.hours=125 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=125, DEFAULT_CONFIG:log.retention.hours=168} +``` + +As we can see from the configuration of the running Kafka cluster, the value of `log.retention.hours` has been changed from `100` to `125`. So the reconfiguration of the cluster is successful. + + +### Reconfigure using apply config + +Now we will reconfigure this cluster again to set `log.retention.hours` to `150`. This time we won't use a new secret. We will use the `applyConfig` field of the `KafkaOpsRequest`. This will merge the new config into the existing secret. + +#### Create KafkaOpsRequest + +Now, we will use the new configuration in the `applyConfig` field in the `KafkaOpsRequest` CR. The `KafkaOpsRequest` YAML is given below, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfigure-apply-combined + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-dev + configuration: + applyConfig: + server.properties: |- + log.retention.hours=150 + timeout: 5m + apply: IfReady +``` + +Here, + +- `spec.databaseRef.name` specifies that we are reconfiguring the `kafka-dev` cluster. +- `spec.type` specifies that we are performing `Reconfigure` on Kafka. +- `spec.configuration.applyConfig` specifies the new configuration that will be merged into the existing secret. + +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure/kafka-reconfigure-apply-combined.yaml +kafkaopsrequest.ops.kubedb.com/kfops-reconfigure-apply-combined created +``` + +#### Verify the new configuration is working + +If everything goes well, `KubeDB` Ops-manager operator will merge this new config with the existing configuration. + +Let's wait for `KafkaOpsRequest` to be `Successful`.
Run the following command to watch `KafkaOpsRequest` CR, + +```bash +$ kubectl get kafkaopsrequests -n demo kfops-reconfigure-apply-combined +NAME TYPE STATUS AGE +kfops-reconfigure-apply-combined Reconfigure Successful 55s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to reconfigure the cluster. + +```bash +$ kubectl describe kafkaopsrequest -n demo kfops-reconfigure-apply-combined +Name: kfops-reconfigure-apply-combined +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-01T09:27:03Z + Generation: 1 + Resource Version: 259123 + UID: fdc46ef0-e2ae-490a-aab8-6a3380ec09d1 +Spec: + Apply: IfReady + Configuration: + Apply Config: + server.properties: log.retention.hours=150 + Database Ref: + Name: kafka-dev + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2024-08-01T09:27:03Z + Message: Kafka ops-request has started to reconfigure kafka nodes + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2024-08-01T09:27:06Z + Message: Successfully prepared user provided custom config secret + Observed Generation: 1 + Reason: PrepareCustomConfig + Status: True + Type: PrepareCustomConfig + Last Transition Time: 2024-08-01T09:27:12Z + Message: successfully reconciled the Kafka with new configure + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-01T09:27:17Z + Message: get pod; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-dev-0 + Last Transition Time: 2024-08-01T09:27:17Z + Message: evict pod; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-dev-0 + Last Transition Time: 2024-08-01T09:27:27Z + Message: check pod running; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-dev-0 + Last Transition Time: 2024-08-01T09:27:32Z + Message: get pod; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-dev-1 + Last Transition Time: 2024-08-01T09:27:32Z + Message: evict pod; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-dev-1 + Last Transition Time: 2024-08-01T09:27:52Z + Message: check pod running; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-dev-1 + Last Transition Time: 2024-08-01T09:27:57Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-08-01T09:27:57Z + Message: Successfully completed reconfigure kafka + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 2m7s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-reconfigure-apply-combined + Normal Starting 2m7s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-dev + Normal Successful 2m7s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-reconfigure-apply-combined + Normal UpdatePetSets 
118s KubeDB Ops-manager Operator successfully reconciled the Kafka with new configure + Warning get pod; ConditionStatus:True; PodName:kafka-dev-0 113s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-dev-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-dev-0 113s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-dev-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-dev-0 108s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-dev-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-dev-0 103s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-dev-0 + Warning get pod; ConditionStatus:True; PodName:kafka-dev-1 98s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-dev-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-dev-1 98s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-dev-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-dev-1 93s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-dev-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-dev-1 78s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-dev-1 + Normal RestartNodes 73s KubeDB Ops-manager Operator Successfully restarted all nodes + Normal Starting 73s KubeDB Ops-manager Operator Resuming Kafka database: demo/kafka-dev + Normal Successful 73s KubeDB Ops-manager Operator Successfully resumed Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-reconfigure-apply-combined +``` + +Now let's exec into one of the instances and run a `kafka-configs.sh` command to check the new configuration we have provided. + +```bash +$ kubectl exec -it -n demo kafka-dev-0 -- kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep 'log.retention.hours' + log.retention.hours=150 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=150, DEFAULT_CONFIG:log.retention.hours=168} + log.retention.hours=150 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=150, DEFAULT_CONFIG:log.retention.hours=168} +``` + +As we can see from the configuration of the running Kafka cluster, the value of `log.retention.hours` has been changed from `125` to `150`. So the reconfiguration of the database using the `applyConfig` field is successful. + + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete kf -n demo kafka-dev +kubectl delete kafkaopsrequest -n demo kfops-reconfigure-apply-combined kfops-reconfigure-combined +kubectl delete secret -n demo kf-combined-custom-config new-kf-combined-custom-config +kubectl delete namespace demo +``` + +## Next Steps + +- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md). +- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md). +- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md). + +[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).) +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
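+
+As referenced in the verification section above, a quick way to confirm that the operator really swapped the referenced config secret after the first `Reconfigure` request is to read the field straight off the `Kafka` object (run this before the cleanup step). This is a minimal sketch; the jsonpath simply mirrors the `spec.configSecret.name` field used throughout this guide:
+
+```bash
+# Print the config secret currently referenced by the Kafka object.
+# After the secret-replacement ops request above, this is expected to print:
+#   new-kf-combined-custom-config
+$ kubectl get kafka -n demo kafka-dev -o jsonpath='{.spec.configSecret.name}'
+```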
diff --git a/docs/guides/kafka/reconfigure/kafka-topology.md b/docs/guides/kafka/reconfigure/kafka-topology.md new file mode 100644 index 0000000000..b9167a1e77 --- /dev/null +++ b/docs/guides/kafka/reconfigure/kafka-topology.md @@ -0,0 +1,625 @@ +--- +title: Reconfigure Kafka Topology +menu: + docs_{{ .version }}: + identifier: kf-reconfigure-topology + name: Topology + parent: kf-reconfigure + weight: 30 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Reconfigure Kafka Topology Cluster + +This guide will show you how to use the `KubeDB` Ops-manager operator to reconfigure a Kafka Topology cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [Kafka](/docs/guides/kafka/concepts/kafka.md) + - [Topology](/docs/guides/kafka/clustering/topology-cluster/index.md) + - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md) + - [Reconfigure Overview](/docs/guides/kafka/reconfigure/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/kafka](/docs/examples/kafka) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +Now, we are going to deploy a `Kafka` Topology cluster using a version supported by the `KubeDB` operator. Then we are going to apply `KafkaOpsRequest` to reconfigure its configuration. + +### Prepare Kafka Topology Cluster + +Now, we are going to deploy a `Kafka` topology cluster with version `3.6.1`. + +### Deploy Kafka + +At first, we will create a secret with the `broker.properties` and `controller.properties` files containing the required configuration settings. + +**broker.properties:** + +```properties +log.retention.hours=100 +``` + +**controller.properties:** + +```properties +controller.quorum.election.timeout.ms=2000 +``` + +Here, `log.retention.hours` is set to `100` for the broker (the default value is `168`), and `controller.quorum.election.timeout.ms` is set to `2000` for the controller. + +Let's create a k8s secret containing the above configuration where the file names will be the keys and the file contents will be the values: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: kf-topology-custom-config + namespace: demo +stringData: + broker.properties: |- + log.retention.hours=100 + controller.properties: |- + controller.quorum.election.timeout.ms=2000 +``` + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure/kafka-topology-custom-config.yaml +secret/kf-topology-custom-config created +``` + +In this section, we are going to create a Kafka object specifying the `spec.configSecret` field to apply this custom configuration.
Below is the YAML of the `Kafka` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.6.1 + configSecret: + name: kf-topology-custom-config + topology: + broker: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut +``` + +Let's create the `Kafka` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure/kafka-topology.yaml +kafka.kubedb.com/kafka-prod created +``` + +Now, wait until `kafka-prod` has status `Ready`, i.e., + +```bash +$ kubectl get kf -n demo -w +NAME TYPE VERSION STATUS AGE +kafka-prod kubedb.com/v1 3.6.1 Provisioning 0s +kafka-prod kubedb.com/v1 3.6.1 Provisioning 24s +. +. +kafka-prod kubedb.com/v1 3.6.1 Ready 92s +``` + +Now, we will check if Kafka has started with the custom configuration we have provided. + +Exec into the Kafka pod and execute the following commands to see the configurations: +```bash +$ kubectl exec -it -n demo kafka-prod-broker-0 -- bash +kafka@kafka-prod-broker-0:~$ kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep log.retention.hours + log.retention.hours=100 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=100, DEFAULT_CONFIG:log.retention.hours=168} + log.retention.hours=100 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=100, DEFAULT_CONFIG:log.retention.hours=168} +``` +Here, we can see that our given configuration is applied to the Kafka cluster for all brokers. `log.retention.hours` is set to `100` from the default value `168`. + +### Reconfigure using new config secret + +Now we will reconfigure this cluster to set `log.retention.hours` to `125`. + +To do that, update the `broker.properties` and `controller.properties` files with the new configuration. + +**broker.properties:** + +```properties +log.retention.hours=125 +``` + +**controller.properties:** + +```properties +controller.quorum.election.timeout.ms=3000 +controller.quorum.fetch.timeout.ms=4000 +``` + +Then, we will create a new secret with these configuration files. + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: new-kf-topology-custom-config + namespace: demo +stringData: + broker.properties: |- + log.retention.hours=125 + controller.properties: |- + controller.quorum.election.timeout.ms=3000 + controller.quorum.fetch.timeout.ms=4000 +``` + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure/new-kafka-topology-custom-config.yaml +secret/new-kf-topology-custom-config created +``` + +#### Create KafkaOpsRequest + +Now, we will use this secret to replace the previous secret using a `KafkaOpsRequest` CR.
The `KafkaOpsRequest` YAML is given below, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfigure-topology + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-prod + configuration: + configSecret: + name: new-kf-topology-custom-config + timeout: 5m + apply: IfReady +``` + +Here, + +- `spec.databaseRef.name` specifies that we are reconfiguring the `kafka-prod` database. +- `spec.type` specifies that we are performing `Reconfigure` on our database. +- `spec.configuration.configSecret.name` specifies the name of the new secret. + +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure/kafka-reconfigure-update-topology.yaml +kafkaopsrequest.ops.kubedb.com/kfops-reconfigure-topology created +``` + +#### Verify the new configuration is working + +If everything goes well, `KubeDB` Ops-manager operator will update the `configSecret` of the `Kafka` object. + +Let's wait for `KafkaOpsRequest` to be `Successful`. Run the following command to watch `KafkaOpsRequest` CR, + +```bash +$ kubectl get kafkaopsrequests -n demo +NAME TYPE STATUS AGE +kfops-reconfigure-topology Reconfigure Successful 4m55s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to reconfigure the database. + +```bash +$ kubectl describe kafkaopsrequest -n demo kfops-reconfigure-topology +Name: kfops-reconfigure-topology +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-02T05:08:37Z + Generation: 1 + Resource Version: 332491 + UID: b6e8cb1b-d29f-445e-bb01-60d29012c7eb +Spec: + Apply: IfReady + Configuration: + Config Secret: + Name: new-kf-topology-custom-config + Database Ref: + Name: kafka-prod + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2024-08-02T05:08:37Z + Message: Kafka ops-request has started to reconfigure kafka nodes + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2024-08-02T05:08:45Z + Message: check reconcile; ConditionStatus:False + Observed Generation: 1 + Status: False + Type: CheckReconcile + Last Transition Time: 2024-08-02T05:09:42Z + Message: successfully reconciled the Kafka with new configure + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-02T05:09:47Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T05:09:47Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T05:10:02Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T05:10:07Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T05:10:07Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 +
Status: True + Type: EvictPod--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T05:10:22Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T05:10:27Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T05:10:27Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T05:11:12Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T05:11:17Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T05:11:17Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T05:11:32Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T05:11:37Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-08-02T05:11:39Z + Message: Successfully completed reconfigure kafka + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 3m7s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-reconfigure-topology + Normal Starting 3m7s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-prod + Normal Successful 3m7s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-reconfigure-topology + Warning check reconcile; ConditionStatus:False 2m59s KubeDB Ops-manager Operator check reconcile; ConditionStatus:False + Normal UpdatePetSets 2m2s KubeDB Ops-manager Operator successfully reconciled the Kafka with new configure + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 117s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 117s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 112s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 102s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 97s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 97s KubeDB Ops-manager Operator evict pod; 
ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 92s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 82s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 77s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 77s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0 72s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 32s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 27s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 27s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 22s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 12s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Normal RestartNodes 7s KubeDB Ops-manager Operator Successfully restarted all nodes + Normal Starting 5s KubeDB Ops-manager Operator Resuming Kafka database: demo/kafka-prod + Normal Successful 5s KubeDB Ops-manager Operator Successfully resumed Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-reconfigure-topology +``` + +Now let's exec into one of the instances and run a `kafka-configs.sh` command to check the new configuration we have provided. + +```bash +$ kubectl exec -it -n demo kafka-prod-broker-0 -- kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep 'log.retention.hours' + log.retention.hours=125 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=125, DEFAULT_CONFIG:log.retention.hours=168} + log.retention.hours=125 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=125, DEFAULT_CONFIG:log.retention.hours=168} +``` + +As we can see from the configuration of the running Kafka cluster, the value of `log.retention.hours` has been changed from `100` to `125`. So the reconfiguration of the cluster is successful. + + +### Reconfigure using apply config + +Now we will reconfigure this cluster again to set `log.retention.hours` to `150`. This time we won't use a new secret. We will use the `applyConfig` field of the `KafkaOpsRequest`. This will merge the new config into the existing secret. + +#### Create KafkaOpsRequest + +Now, we will use the new configuration in the `applyConfig` field in the `KafkaOpsRequest` CR.
The `KafkaOpsRequest` yaml is given below, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfigure-apply-topology + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-prod + configuration: + applyConfig: + broker.properties: |- + log.retention.hours=150 + controller.properties: |- + controller.quorum.election.timeout.ms=4000 + controller.quorum.fetch.timeout.ms=5000 + timeout: 5m + apply: IfReady +``` + +Here, + +- `spec.databaseRef.name` specifies that we are reconfiguring `kafka-prod` cluster. +- `spec.type` specifies that we are performing `Reconfigure` on kafka. +- `spec.configuration.applyConfig` specifies the new configuration that will be merged in the existing secret. + +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/reconfigure/kafka-reconfigure-apply-topology.yaml +kafkaopsrequest.ops.kubedb.com/kfops-reconfigure-apply-topology created +``` + +#### Verify the new configuration is working + +If everything goes well, `KubeDB` Ops-manager operator will merge this new config with the existing configuration. + +Let's wait for `KafkaOpsRequest` to be `Successful`. Run the following command to watch `KafkaOpsRequest` CR, + +```bash +$ kubectl get kafkaopsrequests -n demo kfops-reconfigure-apply-topology +NAME TYPE STATUS AGE +kfops-reconfigure-apply-topology Reconfigure Successful 55s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to reconfigure the cluster. + +```bash +$ kubectl describe kafkaopsrequest -n demo kfops-reconfigure-apply-topology +Name: kfops-reconfigure-apply-topology +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-02T05:14:42Z + Generation: 1 + Resource Version: 332996 + UID: 551d2c92-9431-47a7-a699-8f8115131b49 +Spec: + Apply: IfReady + Configuration: + Apply Config: + broker.properties: log.retention.hours=150 + controller.properties: controller.quorum.election.timeout.ms=4000 +controller.quorum.fetch.timeout.ms=5000 + Database Ref: + Name: kafka-prod + Timeout: 5m + Type: Reconfigure +Status: + Conditions: + Last Transition Time: 2024-08-02T05:14:42Z + Message: Kafka ops-request has started to reconfigure kafka nodes + Observed Generation: 1 + Reason: Reconfigure + Status: True + Type: Reconfigure + Last Transition Time: 2024-08-02T05:14:45Z + Message: Successfully prepared user provided custom config secret + Observed Generation: 1 + Reason: PrepareCustomConfig + Status: True + Type: PrepareCustomConfig + Last Transition Time: 2024-08-02T05:14:52Z + Message: successfully reconciled the Kafka with new configure + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-02T05:14:57Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T05:14:57Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T05:15:07Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: 
True + Type: CheckPodRunning--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T05:15:12Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T05:15:12Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T05:15:27Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T05:15:32Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T05:15:32Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T05:16:07Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T05:16:12Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T05:16:12Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T05:16:27Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T05:16:32Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-08-02T05:16:35Z + Message: Successfully completed reconfigure kafka + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 2m6s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-reconfigure-apply-topology + Normal Starting 2m6s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-prod + Normal Successful 2m6s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-reconfigure-apply-topology + Normal UpdatePetSets 116s KubeDB Ops-manager Operator successfully reconciled the Kafka with new configure + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 111s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 111s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 106s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 101s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning get 
pod; ConditionStatus:True; PodName:kafka-prod-controller-1                      96s   KubeDB Ops-manager Operator  get pod; ConditionStatus:True; PodName:kafka-prod-controller-1
+  Warning  evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1           96s   KubeDB Ops-manager Operator  evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1
+  Warning  check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1  91s   KubeDB Ops-manager Operator  check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1
+  Warning  check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1   81s   KubeDB Ops-manager Operator  check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1
+  Warning  get pod; ConditionStatus:True; PodName:kafka-prod-broker-0                 76s   KubeDB Ops-manager Operator  get pod; ConditionStatus:True; PodName:kafka-prod-broker-0
+  Warning  evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0               76s   KubeDB Ops-manager Operator  evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0
+  Warning  check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0      71s   KubeDB Ops-manager Operator  check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0
+  Warning  check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0       41s   KubeDB Ops-manager Operator  check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0
+  Warning  get pod; ConditionStatus:True; PodName:kafka-prod-broker-1                 36s   KubeDB Ops-manager Operator  get pod; ConditionStatus:True; PodName:kafka-prod-broker-1
+  Warning  evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1               36s   KubeDB Ops-manager Operator  evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1
+  Warning  check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1      31s   KubeDB Ops-manager Operator  check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1
+  Warning  check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1       21s   KubeDB Ops-manager Operator  check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1
+  Normal   RestartNodes                                                               15s   KubeDB Ops-manager Operator  Successfully restarted all nodes
+  Normal   Starting                                                                   14s   KubeDB Ops-manager Operator  Resuming Kafka database: demo/kafka-prod
+  Normal   Successful                                                                 14s   KubeDB Ops-manager Operator  Successfully resumed Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-reconfigure-apply-topology
+```
+
+Now let's exec into one of the instances and run a `kafka-configs.sh` command to check the new configuration we have provided.
+
+```bash
+$ kubectl exec -it -n demo kafka-prod-broker-0 -- kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep 'log.retention.hours'
+  log.retention.hours=150 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=150, DEFAULT_CONFIG:log.retention.hours=168}
+  log.retention.hours=150 sensitive=false synonyms={STATIC_BROKER_CONFIG:log.retention.hours=150, DEFAULT_CONFIG:log.retention.hours=168}
+```
+
+As we can see from the configuration of the running Kafka brokers, the value of `log.retention.hours` has been changed from `125` to `150`. So the reconfiguration of the database using the `applyConfig` field is successful.
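+
+Beyond the broker settings, you may also want to confirm the controller-side values (`controller.quorum.election.timeout.ms` and `controller.quorum.fetch.timeout.ms`) that were merged via `applyConfig`. The sketch below is an assumption-based check: the rendered config path inside the controller pod (`/opt/kafka/config/kafkaconfig/controller.properties` here) can vary between KubeDB versions, so adjust it to your setup. It should print the two values set above.
+
+```bash
+# Hypothetical path: grep the merged quorum settings from the rendered controller config
+$ kubectl exec -it -n demo kafka-prod-controller-0 -- \
+    grep 'controller.quorum' /opt/kafka/config/kafkaconfig/controller.properties
+```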
+
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete kf -n demo kafka-prod
+kubectl delete kafkaopsrequest -n demo kfops-reconfigure-apply-topology kfops-reconfigure-topology
+kubectl delete secret -n demo kf-topology-custom-config new-kf-topology-custom-config
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).
+- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md).
+- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/reconfigure/overview.md b/docs/guides/kafka/reconfigure/overview.md
new file mode 100644
index 0000000000..dc33f41d20
--- /dev/null
+++ b/docs/guides/kafka/reconfigure/overview.md
@@ -0,0 +1,54 @@
+---
+title: Reconfiguring Kafka
+menu:
+  docs_{{ .version }}:
+    identifier: kf-reconfigure-overview
+    name: Overview
+    parent: kf-reconfigure
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfiguring Kafka
+
+This guide will give an overview of how the KubeDB Ops-manager operator reconfigures `Kafka` components such as Combined, Broker, Controller, etc.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Kafka](/docs/guides/kafka/concepts/kafka.md)
+  - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
+
+## How Reconfiguring Kafka Process Works
+
+The following diagram shows how the KubeDB Ops-manager operator reconfigures `Kafka` components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Reconfiguring process of Kafka" src="/docs/images/day-2-operation/kafka/kf-reconfigure.svg">
+<figcaption align="center">Fig: Reconfiguring process of Kafka</figcaption>
+</figure>
+
+The Reconfiguring Kafka process consists of the following steps:
+
+1. At first, a user creates a `Kafka` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `Kafka` CR.
+
+3. When the operator finds a `Kafka` CR, it creates the required number of `PetSets` and related resources like secrets, services, etc.
+
+4. Then, in order to reconfigure the various components (i.e. Combined, Broker) of the `Kafka`, the user creates a `KafkaOpsRequest` CR with the desired information.
+
+5. `KubeDB` Ops-manager operator watches the `KafkaOpsRequest` CR.
+
+6. When it finds a `KafkaOpsRequest` CR, it halts the `Kafka` object which is referenced in the `KafkaOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Kafka` object during the reconfiguring process.
+
+7. Then the `KubeDB` Ops-manager operator will replace the existing configuration with the new configuration provided, or merge the new configuration with the existing configuration, according to the `KafkaOpsRequest` CR.
+
+8. Then the `KubeDB` Ops-manager operator will restart the related PetSet Pods so that they restart with the new configuration defined in the `KafkaOpsRequest` CR.
+
+9. After the successful reconfiguring of the `Kafka` components, the `KubeDB` Ops-manager operator resumes the `Kafka` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on reconfiguring Kafka components using `KafkaOpsRequest` CRD.
\ No newline at end of file
diff --git a/docs/guides/kafka/restart/_index.md b/docs/guides/kafka/restart/_index.md
new file mode 100644
index 0000000000..d0d4240b4d
--- /dev/null
+++ b/docs/guides/kafka/restart/_index.md
@@ -0,0 +1,10 @@
+---
+title: Restart Kafka
+menu:
+  docs_{{ .version }}:
+    identifier: kf-restart
+    name: Restart
+    parent: kf-kafka-guides
+    weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/kafka/restart/restart.md b/docs/guides/kafka/restart/restart.md
new file mode 100644
index 0000000000..304dd8aaa1
--- /dev/null
+++ b/docs/guides/kafka/restart/restart.md
@@ -0,0 +1,252 @@
+---
+title: Restart Kafka
+menu:
+  docs_{{ .version }}:
+    identifier: kf-restart-details
+    name: Restart Kafka
+    parent: kf-restart
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Restart Kafka
+
+KubeDB supports restarting the Kafka database via a KafkaOpsRequest. Restarting is useful if some pods get stuck in some phase, or are not working correctly. This tutorial will show you how to do that.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout the tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/kafka](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Deploy Kafka
+
+In this section, we are going to deploy a Kafka database using KubeDB.
+
+```yaml
+apiVersion: kubedb.com/v1
+kind: Kafka
+metadata:
+  name: kafka-prod
+  namespace: demo
+spec:
+  version: 3.6.1
+  topology:
+    broker:
+      replicas: 2
+      podTemplate:
+        spec:
+          containers:
+            - name: kafka
+              resources:
+                requests:
+                  cpu: "500m"
+                  memory: "1Gi"
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    controller:
+      replicas: 2
+      podTemplate:
+        spec:
+          containers:
+            - name: kafka
+              resources:
+                requests:
+                  cpu: "500m"
+                  memory: "1Gi"
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+  storageType: Durable
+  deletionPolicy: DoNotTerminate
+```
+
+- `spec.topology` represents the specification for kafka topology.
+  - `broker` denotes the broker node of kafka topology.
+  - `controller` denotes the controller node of kafka topology.
+
+Let's create the `Kafka` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/restart/kafka.yaml
+kafka.kubedb.com/kafka-prod created
+```
+
+## Apply Restart opsRequest
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: KafkaOpsRequest
+metadata:
+  name: restart
+  namespace: demo
+spec:
+  type: Restart
+  databaseRef:
+    name: kafka-prod
+  timeout: 5m
+  apply: Always
+```
+
+- `spec.type` specifies the type of the ops request.
+- `spec.databaseRef` holds the name of the Kafka CR. It should be available in the same namespace as the opsRequest.
+- The meaning of the `spec.timeout` & `spec.apply` fields can be found [here](/docs/guides/kafka/concepts/kafkaopsrequest.md#spectimeout).
+
+> Note: The method of restarting the combined node is exactly the same as above. All you need is to specify the corresponding Kafka name in the `spec.databaseRef.name` section.
+
+Let's create the `KafkaOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/restart/ops.yaml
+kafkaopsrequest.ops.kubedb.com/restart created
+```
+
+Now the Ops-manager operator will first restart the controller pods, then the broker pods of the referenced Kafka.
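+
+While the ops request is running, you can watch the referenced pods being evicted and recreated in order. The label selector below is an assumption based on the common KubeDB labeling convention (`app.kubernetes.io/instance=<kafka-name>`); check the labels on your pods if it does not match anything.
+
+```bash
+# Watch the kafka-prod pods restart one by one (controllers first, then brokers)
+$ kubectl get pods -n demo -l app.kubernetes.io/instance=kafka-prod -w
+```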
+ +```shell +$ kubectl get kfops -n demo +NAME TYPE STATUS AGE +restart Restart Successful 119s + +$ kubectl get kfops -n demo restart -oyaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"KafkaOpsRequest","metadata":{"annotations":{},"name":"restart","namespace":"demo"},"spec":{"apply":"Always","databaseRef":{"name":"kafka-prod"},"timeout":"3m","type":"Restart"}} + creationTimestamp: "2024-07-26T10:12:10Z" + generation: 1 + name: restart + namespace: demo + resourceVersion: "24434" + uid: 956a374e-1d6f-4f68-828f-cfed4410b175 +spec: + apply: Always + databaseRef: + name: kafka-prod + timeout: 3m + type: Restart +status: + conditions: + - lastTransitionTime: "2024-07-26T10:12:10Z" + message: Kafka ops-request has started to restart kafka nodes + observedGeneration: 1 + reason: Restart + status: "True" + type: Restart + - lastTransitionTime: "2024-07-26T10:12:18Z" + message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + observedGeneration: 1 + status: "True" + type: GetPod--kafka-prod-controller-0 + - lastTransitionTime: "2024-07-26T10:12:18Z" + message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + observedGeneration: 1 + status: "True" + type: EvictPod--kafka-prod-controller-0 + - lastTransitionTime: "2024-07-26T10:12:23Z" + message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + observedGeneration: 1 + status: "True" + type: CheckPodRunning--kafka-prod-controller-0 + - lastTransitionTime: "2024-07-26T10:12:28Z" + message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + observedGeneration: 1 + status: "True" + type: GetPod--kafka-prod-controller-1 + - lastTransitionTime: "2024-07-26T10:12:28Z" + message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + observedGeneration: 1 + status: "True" + type: EvictPod--kafka-prod-controller-1 + - lastTransitionTime: "2024-07-26T10:12:38Z" + message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + observedGeneration: 1 + status: "True" + type: CheckPodRunning--kafka-prod-controller-1 + - lastTransitionTime: "2024-07-26T10:12:43Z" + message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + observedGeneration: 1 + status: "True" + type: GetPod--kafka-prod-broker-0 + - lastTransitionTime: "2024-07-26T10:12:43Z" + message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + observedGeneration: 1 + status: "True" + type: EvictPod--kafka-prod-broker-0 + - lastTransitionTime: "2024-07-26T10:13:18Z" + message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + observedGeneration: 1 + status: "True" + type: CheckPodRunning--kafka-prod-broker-0 + - lastTransitionTime: "2024-07-26T10:13:23Z" + message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + observedGeneration: 1 + status: "True" + type: GetPod--kafka-prod-broker-1 + - lastTransitionTime: "2024-07-26T10:13:23Z" + message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + observedGeneration: 1 + status: "True" + type: EvictPod--kafka-prod-broker-1 + - lastTransitionTime: "2024-07-26T10:13:28Z" + message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + observedGeneration: 1 + status: "True" + type: CheckPodRunning--kafka-prod-broker-1 + - lastTransitionTime: "2024-07-26T10:13:33Z" + message: Successfully Restarted Kafka nodes + observedGeneration: 
1
+    reason: RestartNodes
+    status: "True"
+    type: RestartNodes
+  - lastTransitionTime: "2024-07-26T10:13:33Z"
+    message: Controller has successfully restart the Kafka replicas
+    observedGeneration: 1
+    reason: Successful
+    status: "True"
+    type: Successful
+  observedGeneration: 1
+  phase: Successful
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete kafkaopsrequest -n demo restart
+kubectl delete kafka -n demo kafka-prod
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).
+- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md).
+- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/restproxy/_index.md b/docs/guides/kafka/restproxy/_index.md
new file mode 100644
index 0000000000..b02df6b52b
--- /dev/null
+++ b/docs/guides/kafka/restproxy/_index.md
@@ -0,0 +1,10 @@
+---
+title: Rest Proxy
+menu:
+  docs_{{ .version }}:
+    identifier: kf-rest-proxy-guides
+    name: RestProxy
+    parent: kf-kafka-guides
+    weight: 25
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/kafka/restproxy/overview.md b/docs/guides/kafka/restproxy/overview.md
new file mode 100644
index 0000000000..7c54381889
--- /dev/null
+++ b/docs/guides/kafka/restproxy/overview.md
@@ -0,0 +1,408 @@
+---
+title: Rest Proxy Overview
+menu:
+  docs_{{ .version }}:
+    identifier: kf-rest-proxy-guides-overview
+    name: Overview
+    parent: kf-rest-proxy-guides
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# RestProxy QuickStart
+
+This tutorial will show you how to use KubeDB to run a [Rest Proxy](https://www.karapace.io/quickstart).
+

+<p align="center">
+  <img alt="lifecycle" src="/docs/images/kafka/restproxy/restproxy-crd-lifecycle.png">
+</p>

+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install the KubeDB operator in your cluster following the steps [here](/docs/setup/install/_index.md).
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout the tutorial.
+
+```bash
+$ kubectl create namespace demo
+namespace/demo created
+
+$ kubectl get namespace
+NAME                 STATUS   AGE
+demo                 Active   9s
+```
+
+> Note: YAML files used in this tutorial are stored in [examples/kafka/restproxy/](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka/restproxy) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+> We have designed this tutorial to demonstrate a production setup of KubeDB managed RestProxy. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/restproxy/overview.md#tips-for-testing).
+
+## Find Available RestProxy Versions
+
+When you install the KubeDB operator, it registers a CRD named [SchemaRegistryVersion](/docs/guides/kafka/concepts/schemaregistryversion.md). RestProxy uses SchemaRegistryVersions whose distribution is `Aiven` to create a RestProxy instance. The installation process comes with a set of tested SchemaRegistryVersion objects. Let's check the available SchemaRegistryVersions by running:
+
+```bash
+$ kubectl get ksrversion
+
+NAME           VERSION   DISTRIBUTION   REGISTRY_IMAGE                                     DEPRECATED   AGE
+2.5.11.final   2.5.11    Apicurio       apicurio/apicurio-registry-kafkasql:2.5.11.Final                3d
+3.15.0         3.15.0    Aiven          ghcr.io/aiven-open/karapace:3.15.0                              3d
+```
+
+> **Note**: Currently RestProxy is supported only for the Aiven distribution. Use a version with distribution `Aiven` to create a Kafka Rest Proxy.
+
+Notice the `DEPRECATED` column. Here, `true` means that this SchemaRegistryVersion is deprecated for the current KubeDB version. KubeDB will not work with a deprecated SchemaRegistryVersion. You can also use the short form `ksrversion` to check available SchemaRegistryVersions.
+
+In this tutorial, we will use the `3.15.0` SchemaRegistryVersion CR to create a Kafka Rest Proxy.
+
+## Create a Kafka RestProxy
+
+The KubeDB operator implements a RestProxy CRD to define the specification of RestProxy.
+
+The RestProxy instance used for this tutorial:
+
+```yaml
+apiVersion: kafka.kubedb.com/v1alpha1
+kind: RestProxy
+metadata:
+  name: restproxy-quickstart
+  namespace: demo
+spec:
+  version: 3.15.0
+  replicas: 2
+  kafkaRef:
+    name: kafka-quickstart
+    namespace: demo
+  deletionPolicy: WipeOut
+```
+
+Here,
+
+- `spec.version` - is the name of the SchemaRegistryVersion CR. Here, a RestProxy of version `3.15.0` will be created.
+- `spec.replicas` - specifies the number of rest proxy instances to run. Here, the RestProxy will run with 2 replicas.
+- `spec.kafkaRef` specifies the Kafka instance that the RestProxy will connect to. Here, the RestProxy will connect to the Kafka instance named `kafka-quickstart` in the `demo` namespace. It is an appbinding reference of the Kafka instance.
+- `spec.deletionPolicy` specifies what KubeDB should do when a user tries to delete the RestProxy CR. Deletion policy `WipeOut` will delete all the instances and secrets when the RestProxy CR is deleted.
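+
+Since `spec.kafkaRef` is resolved through an AppBinding, a quick sanity check before applying the manifest is to confirm that the AppBinding exists. This is only a sketch: the AppBinding is created by KubeDB together with the Kafka instance (deployed in the next step), and is expected to share the Kafka instance's name.
+
+```bash
+# Hypothetical check: the AppBinding should carry the Kafka instance's name
+$ kubectl get appbinding -n demo kafka-quickstart
+```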
+
+Before creating a RestProxy, you have to deploy a `Kafka` cluster first. To deploy a Kafka cluster, follow the [Kafka Quickstart](/docs/guides/kafka/quickstart/kafka/index.md) guide. Let's assume `kafka-quickstart` is already deployed using KubeDB.
+Let's create the RestProxy CR that is shown above:
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/restproxy/restproxy-quickstart.yaml
+restproxy.kafka.kubedb.com/restproxy-quickstart created
+```
+
+The RestProxy's `STATUS` will go from `Provisioning` to `Ready` state within a few minutes. Once the `STATUS` is `Ready`, you are ready to use the RestProxy.
+
+```bash
+$ kubectl get restproxy -n demo -w
+NAME                   TYPE                        VERSION   STATUS         AGE
+restproxy-quickstart   kafka.kubedb.com/v1alpha1   3.6.1     Provisioning   2s
+restproxy-quickstart   kafka.kubedb.com/v1alpha1   3.6.1     Provisioning   4s
+.
+.
+restproxy-quickstart   kafka.kubedb.com/v1alpha1   3.6.1     Ready          112s
+```
+
+Describe the `RestProxy` object to observe the progress if something goes wrong or the status is not changing for a long period of time:
+
+```bash
+$ kubectl describe restproxy -n demo restproxy-quickstart
+Name:         restproxy-quickstart
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  kafka.kubedb.com/v1alpha1
+Kind:         RestProxy
+Metadata:
+  Creation Timestamp:  2024-09-02T06:27:36Z
+  Finalizers:
+    kafka.kubedb.com/finalizer
+  Generation:        1
+  Resource Version:  179508
+  UID:               5defcf67-015d-4f15-a8ef-661717258f76
+Spec:
+  Deletion Policy:  WipeOut
+  Health Checker:
+    Failure Threshold:  3
+    Period Seconds:     10
+    Timeout Seconds:    10
+  Kafka Ref:
+    Name:       kafka-quickstart
+    Namespace:  demo
+  Pod Template:
+    Controller:
+    Metadata:
+    Spec:
+      Containers:
+        Name:  rest-proxy
+        Resources:
+          Limits:
+            Memory:  1Gi
+          Requests:
+            Cpu:     500m
+            Memory:  1Gi
+        Security Context:
+          Allow Privilege Escalation:  false
+          Capabilities:
+            Drop:
+              ALL
+          Run As Non Root:  true
+          Run As User:      1001
+          Seccomp Profile:
+            Type:  RuntimeDefault
+      Pod Placement Policy:
+        Name:  default
+      Security Context:
+        Fs Group:  1001
+  Replicas:  2
+  Version:   3.15.0
+Status:
+  Conditions:
+    Last Transition Time:  2024-09-02T06:27:36Z
+    Message:               The KubeDB operator has started the provisioning of RestProxy: demo/restproxy-quickstart
+    Observed Generation:   1
+    Reason:                RestProxyProvisioningStartedSuccessfully
+    Status:                True
+    Type:                  ProvisioningStarted
+    Last Transition Time:  2024-09-02T06:28:17Z
+    Message:               All desired replicas are ready.
+    Observed Generation:   1
+    Reason:                AllReplicasReady
+    Status:                True
+    Type:                  ReplicaReady
+    Last Transition Time:  2024-09-02T06:28:29Z
+    Message:               The RestProxy: demo/restproxy-quickstart is accepting client requests
+    Observed Generation:   1
+    Reason:                DatabaseAcceptingConnectionRequest
+    Status:                True
+    Type:                  AcceptingConnection
+    Last Transition Time:  2024-09-02T06:28:29Z
+    Message:               The RestProxy: demo/restproxy-quickstart is ready.
+    Observed Generation:   1
+    Reason:                ReadinessCheckSucceeded
+    Status:                True
+    Type:                  Ready
+    Last Transition Time:  2024-09-02T06:28:30Z
+    Message:               The RestProxy: demo/restproxy-quickstart is successfully provisioned.
+    Observed Generation:   1
+    Reason:                DatabaseSuccessfullyProvisioned
+    Status:                True
+    Type:                  Provisioned
+  Phase:  Ready
+Events:   <none>
+```
+
+### KubeDB Operator Generated Resources
+
+On deployment of a RestProxy CR, the operator creates the following resources:
+
+```bash
+$ kubectl get all,secret,petset -n demo -l 'app.kubernetes.io/instance=restproxy-quickstart'
+NAME                         READY   STATUS    RESTARTS   AGE
+pod/restproxy-quickstart-0   1/1     Running   0          117s
+pod/restproxy-quickstart-1   1/1     Running   0          79s
+
+NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
+service/restproxy-quickstart        ClusterIP   10.96.117.46   <none>        8082/TCP   119s
+service/restproxy-quickstart-pods   ClusterIP   None           <none>        8082/TCP   119s
+
+NAME                                 TYPE     DATA   AGE
+secret/restproxy-quickstart-config   Opaque   1      119s
+
+NAME                                                 AGE
+petset.apps.k8s.appscode.com/restproxy-quickstart   117s
+```
+
+- `PetSet` - a PetSet named after the RestProxy instance.
+- `Services` - For a RestProxy instance, a headless service is created with the name `{RestProxy-name}-pods` and a primary service is created with the name `{RestProxy-name}`.
+- `Secrets` - default configuration secrets are generated for RestProxy.
+  - `{RestProxy-Name}-config` - the default configuration secret created by the operator.
+
+### Accessing Kafka using Rest Proxy
+
+You can access `Kafka` using the REST API. The RestProxy REST API is available at port `8082` of the Rest Proxy service.
+
+To access the RestProxy REST API, you can use the `kubectl port-forward` command to forward the port to your local machine.
+
+```bash
+$ kubectl port-forward svc/restproxy-quickstart 8082:8082 -n demo
+Forwarding from 127.0.0.1:8082 -> 8082
+Forwarding from [::1]:8082 -> 8082
+```
+
+In another terminal, you can use `curl` to list topics, produce and consume messages from the Kafka cluster.
+
+List topics:
+
+```bash
+$ curl localhost:8082/topics | jq
+[
+  "order_notification",
+  "kafka-health",
+  "__consumer_offsets",
+  "kafkasql-journal"
+]
+
+```
+
+#### Produce a message to a topic `order_notification` (replace `order_notification` with your topic name):
+
+> Note: The topic must be created in the Kafka cluster before producing messages.
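+
+If the topic does not exist yet, you can create it from inside a Kafka broker pod before producing. The sketch below makes two assumptions: the pod name `kafka-quickstart-0` and the client properties path `/opt/kafka/config/clientauth.properties` follow the layout used in the other guides of this section. Adjust them to match your Kafka deployment.
+
+```bash
+# Hypothetical: create the demo topic on the kafka-quickstart cluster first
+$ kubectl exec -it -n demo kafka-quickstart-0 -- kafka-topics.sh --create \
+    --topic order_notification --partitions 1 --replication-factor 1 \
+    --bootstrap-server localhost:9092 \
+    --command-config /opt/kafka/config/clientauth.properties
+```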
+
+```bash
+curl -X POST http://localhost:8082/topics/order_notification \
+  -H "Content-Type: application/vnd.kafka.json.v2+json" \
+  -d '{
+    "records": [
+      {"value": {"orderId": "12345", "status": "Order Placed", "customerName": "Alice Johnson", "totalAmount": 150.75, "timestamp": "2024-08-30T12:34:56Z"}},
+      {"value": {"orderId": "12346", "status": "Shipped", "customerName": "Bob Smith", "totalAmount": 249.99, "timestamp": "2024-08-30T12:45:12Z"}},
+      {"value": {"orderId": "12347", "status": "Delivered", "customerName": "Charlie Brown", "totalAmount": 89.50, "timestamp": "2024-08-30T13:00:22Z"}}
+    ]
+  }' | jq
+
+{
+  "key_schema_id": null,
+  "offsets": [
+    {
+      "offset": 0,
+      "partition": 0
+    },
+    {
+      "offset": 1,
+      "partition": 0
+    },
+    {
+      "offset": 2,
+      "partition": 0
+    }
+  ],
+  "value_schema_id": null
+}
+```
+#### Consume messages from a topic `order_notification` (replace `order_notification` with your topic name):
+
+To consume messages from a Kafka topic using the Kafka REST Proxy, you'll need to perform the following steps:
+
+Create a Consumer Instance
+
+```bash
+$ curl -X POST http://localhost:8082/consumers/order_consumer \
+  -H "Content-Type: application/vnd.kafka.v2+json" \
+  -d '{
+    "name": "order_consumer_instance",
+    "format": "json",
+    "auto.offset.reset": "earliest"
+  }' | jq
+
+{
+  "base_uri": "http://restproxy-quickstart-0:8082/consumers/order_consumer/instances/order_consumer_instance",
+  "instance_id": "order_consumer_instance"
+}
+```
+
+Subscribe the Consumer to a Topic
+
+```bash
+$ curl -X POST http://localhost:8082/consumers/order_consumer/instances/order_consumer_instance/subscription \
+  -H "Content-Type: application/vnd.kafka.v2+json" \
+  -d '{
+    "topics": ["order_notification"]
+  }'
+```
+
+Consume Messages
+
+```bash
+$ curl -X GET http://localhost:8082/consumers/order_consumer/instances/order_consumer_instance/records \
+  -H "Accept: application/vnd.kafka.json.v2+json" | jq
+
+[
+  {
+    "key": null,
+    "offset": 0,
+    "partition": 0,
+    "timestamp": 1725259256610,
+    "topic": "order_notification",
+    "value": {
+      "customerName": "Alice Johnson",
+      "orderId": "12345",
+      "status": "Order Placed",
+      "timestamp": "2024-08-30T12:34:56Z",
+      "totalAmount": 150.75
+    }
+  },
+  {
+    "key": null,
+    "offset": 1,
+    "partition": 0,
+    "timestamp": 1725259256610,
+    "topic": "order_notification",
+    "value": {
+      "customerName": "Bob Smith",
+      "orderId": "12346",
+      "status": "Shipped",
+      "timestamp": "2024-08-30T12:45:12Z",
+      "totalAmount": 249.99
+    }
+  },
+  {
+    "key": null,
+    "offset": 2,
+    "partition": 0,
+    "timestamp": 1725259256610,
+    "topic": "order_notification",
+    "value": {
+      "customerName": "Charlie Brown",
+      "orderId": "12347",
+      "status": "Delivered",
+      "timestamp": "2024-08-30T13:00:22Z",
+      "totalAmount": 89.5
+    }
+  }
+]
+```
+
+Delete the Consumer Instance
+
+```bash
+$ curl -X DELETE http://localhost:8082/consumers/order_consumer/instances/order_consumer_instance
+```
+
+You can also list brokers, describe topics and more using the Kafka RestProxy.
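+
+For example, the broker list is exposed on a `/brokers` endpoint. This follows the Confluent-style REST Proxy v2 API that Karapace implements; treat the exact path as an assumption and consult the Karapace documentation if your version differs.
+
+```bash
+# List the broker ids known to the cluster (assumed v2-style endpoint)
+$ curl localhost:8082/brokers | jq
+```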
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo restproxy restproxy-quickstart -p '{"spec":{"deletionPolicy":"WipeOut"}}' --type="merge"
+restproxy.kafka.kubedb.com/restproxy-quickstart patched
+
+$ kubectl delete krp restproxy-quickstart -n demo
+restproxy.kafka.kubedb.com "restproxy-quickstart" deleted
+
+$ kubectl delete kafka kafka-quickstart -n demo
+kafka.kubedb.com "kafka-quickstart" deleted
+
+$ kubectl delete namespace demo
+namespace "demo" deleted
+```
+
+## Tips for Testing
+
+If you are just testing some basic functionalities, you might want to avoid additional hassles due to some safety features that are great for the production environment. You can follow these tips to avoid them.
+
+1. **Use `deletionPolicy: Delete`**. It is nice to be able to resume the cluster from the previous one. So, we preserve auth `Secrets`. If you don't want to resume the cluster, you can just use `spec.deletionPolicy: WipeOut`. It will clean up every resource that was created with the RestProxy CR. For more details, please visit [here](/docs/guides/kafka/concepts/schemaregistry.md#specdeletionpolicy).
+
+## Next Steps
+
+- [Quickstart Kafka](/docs/guides/kafka/quickstart/kafka/index.md) with KubeDB Operator.
+- [Quickstart ConnectCluster](/docs/guides/kafka/connectcluster/overview.md) with KubeDB Operator.
+- Use [kubedb cli](/docs/guides/kafka/cli/cli.md) to manage databases like kubectl for Kubernetes.
+- Detail concepts of [ConnectCluster object](/docs/guides/kafka/concepts/connectcluster.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/scaling/_index.md b/docs/guides/kafka/scaling/_index.md
new file mode 100644
index 0000000000..98b83c7106
--- /dev/null
+++ b/docs/guides/kafka/scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Scaling Kafka
+menu:
+  docs_{{ .version }}:
+    identifier: kf-scaling
+    name: Scaling
+    parent: kf-kafka-guides
+    weight: 43
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/kafka/scaling/horizontal-scaling/_index.md b/docs/guides/kafka/scaling/horizontal-scaling/_index.md
new file mode 100644
index 0000000000..30adbd72dd
--- /dev/null
+++ b/docs/guides/kafka/scaling/horizontal-scaling/_index.md
@@ -0,0 +1,10 @@
+---
+title: Horizontal Scaling
+menu:
+  docs_{{ .version }}:
+    identifier: kf-horizontal-scaling
+    name: Horizontal Scaling
+    parent: kf-scaling
+    weight: 10
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/kafka/scaling/horizontal-scaling/combined.md b/docs/guides/kafka/scaling/horizontal-scaling/combined.md
new file mode 100644
index 0000000000..4ded4cffb7
--- /dev/null
+++ b/docs/guides/kafka/scaling/horizontal-scaling/combined.md
@@ -0,0 +1,969 @@
+---
+title: Horizontal Scaling Combined Kafka
+menu:
+  docs_{{ .version }}:
+    identifier: kf-horizontal-scaling-combined
+    name: Combined Cluster
+    parent: kf-horizontal-scaling
+    weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Horizontal Scale Kafka Combined Cluster
+
+This guide will show you how to use `KubeDB` Ops-manager operator to scale the Kafka combined cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Kafka](/docs/guides/kafka/concepts/kafka.md)
+  - [Combined](/docs/guides/kafka/clustering/combined-cluster/index.md)
+  - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
+  - [Horizontal Scaling Overview](/docs/guides/kafka/scaling/horizontal-scaling/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/kafka](/docs/examples/kafka) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Apply Horizontal Scaling on Combined Cluster
+
+Here, we are going to deploy a `Kafka` combined cluster using a version supported by the `KubeDB` operator. Then we are going to apply horizontal scaling on it.
+
+### Prepare Kafka Combined cluster
+
+Now, we are going to deploy a `Kafka` combined cluster with version `3.6.1`.
+
+### Deploy Kafka combined cluster
+
+In this section, we are going to deploy a Kafka combined cluster. Then, in the next section we will scale the cluster using `KafkaOpsRequest` CRD. Below is the YAML of the `Kafka` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1
+kind: Kafka
+metadata:
+  name: kafka-dev
+  namespace: demo
+spec:
+  replicas: 2
+  version: 3.6.1
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  deletionPolicy: WipeOut
+```
+
+Let's create the `Kafka` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/scaling/kafka-combined.yaml
+kafka.kubedb.com/kafka-dev created
+```
+
+Now, wait until `kafka-dev` has status `Ready`. i.e.,
+
+```bash
+$ kubectl get kf -n demo -w
+NAME        TYPE            VERSION   STATUS         AGE
+kafka-dev   kubedb.com/v1   3.6.1     Provisioning   0s
+kafka-dev   kubedb.com/v1   3.6.1     Provisioning   24s
+.
+.
+kafka-dev   kubedb.com/v1   3.6.1     Ready          92s
+```
+
+Let's check the number of replicas from the Kafka object, and the number of pods the petset has,
+
+```bash
+$ kubectl get kafka -n demo kafka-dev -o json | jq '.spec.replicas'
+2
+
+$ kubectl get petset -n demo kafka-dev -o json | jq '.spec.replicas'
+2
+```
+
+We can see from both commands that the cluster has 2 replicas.
+
+Also, we can verify the replicas of the combined cluster with an internal Kafka command by exec-ing into a replica.
+
+Now let's exec into an instance and run a Kafka internal command to check the number of replicas,
+
+```bash
+$ kubectl exec -it -n demo kafka-dev-0 -- kafka-broker-api-versions.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties
+kafka-dev-0.kafka-dev-pods.demo.svc.cluster.local:9092 (id: 0 rack: null) -> (
+  Produce(0): 0 to 9 [usable: 9],
+  Fetch(1): 0 to 15 [usable: 15],
+  ListOffsets(2): 0 to 8 [usable: 8],
+  Metadata(3): 0 to 12 [usable: 12],
+  LeaderAndIsr(4): UNSUPPORTED,
+  StopReplica(5): UNSUPPORTED,
+  UpdateMetadata(6): UNSUPPORTED,
+  ControlledShutdown(7): UNSUPPORTED,
+  OffsetCommit(8): 0 to 8 [usable: 8],
+  OffsetFetch(9): 0 to 8 [usable: 8],
+  FindCoordinator(10): 0 to 4 [usable: 4],
+  JoinGroup(11): 0 to 9 [usable: 9],
+  Heartbeat(12): 0 to 4 [usable: 4],
+  LeaveGroup(13): 0 to 5 [usable: 5],
+  SyncGroup(14): 0 to 5 [usable: 5],
+  DescribeGroups(15): 0 to 5 [usable: 5],
+  ListGroups(16): 0 to 4 [usable: 4],
+  SaslHandshake(17): 0 to 1 [usable: 1],
+  ApiVersions(18): 0 to 3 [usable: 3],
+  CreateTopics(19): 0 to 7 [usable: 7],
+  DeleteTopics(20): 0 to 6 [usable: 6],
+  DeleteRecords(21): 0 to 2 [usable: 2],
+  InitProducerId(22): 0 to 4 [usable: 4],
+  OffsetForLeaderEpoch(23): 0 to 4 [usable: 4],
+  AddPartitionsToTxn(24): 0 to 4 [usable: 4],
+  AddOffsetsToTxn(25): 0 to 3 [usable: 3],
+  EndTxn(26): 0 to 3 [usable: 3],
+  WriteTxnMarkers(27): 0 to 1 [usable: 1],
+  TxnOffsetCommit(28): 0 to 3 [usable: 3],
+  DescribeAcls(29): 0 to 3 [usable: 3],
+  CreateAcls(30): 0 to 3 [usable: 3],
+  DeleteAcls(31): 0 to 3 [usable: 3],
+  DescribeConfigs(32): 0 to 4 [usable: 4],
+  AlterConfigs(33): 0 to 2 [usable: 2],
+  AlterReplicaLogDirs(34): 0 to 2 [usable: 2],
+  DescribeLogDirs(35): 0 to 4 [usable: 4],
+  SaslAuthenticate(36): 0 to 2 [usable: 2],
+  CreatePartitions(37): 0 to 3 [usable: 3],
+  CreateDelegationToken(38): 0 to 3 [usable: 3],
+  RenewDelegationToken(39): 0 to 2 [usable: 2],
+  ExpireDelegationToken(40): 0 to 2 [usable: 2],
+  DescribeDelegationToken(41): 0 to 3 [usable: 3],
+  DeleteGroups(42): 0 to 2 [usable: 2],
+  ElectLeaders(43): 0 to 2 [usable: 2],
+  IncrementalAlterConfigs(44): 0 to 1 [usable: 1],
+  AlterPartitionReassignments(45): 0 [usable: 0],
+  ListPartitionReassignments(46): 0 [usable: 0],
+  OffsetDelete(47): 0 [usable: 0],
+  DescribeClientQuotas(48): 0 to 1 [usable: 1],
+  AlterClientQuotas(49): 0 to 1 [usable: 1],
+  DescribeUserScramCredentials(50): 0 [usable: 0],
+  AlterUserScramCredentials(51): 0 [usable: 0],
+  DescribeQuorum(55): 0 to 1 [usable: 1],
+  AlterPartition(56): UNSUPPORTED,
+  UpdateFeatures(57): 0 to 1 [usable: 1],
+  Envelope(58): UNSUPPORTED,
+  DescribeCluster(60): 0 [usable: 0],
+  DescribeProducers(61): 0 [usable: 0],
+  UnregisterBroker(64): 0 [usable: 0],
+  DescribeTransactions(65): 0 [usable: 0],
+  ListTransactions(66): 0 [usable: 0],
+  AllocateProducerIds(67): UNSUPPORTED,
+  ConsumerGroupHeartbeat(68): UNSUPPORTED
+)
+kafka-dev-1.kafka-dev-pods.demo.svc.cluster.local:9092 (id: 1 rack: null) -> (
+  Produce(0): 0 to 9 [usable: 9],
+  Fetch(1): 0 to 15 [usable: 15],
+  ListOffsets(2): 0 to 8 [usable: 8],
+  Metadata(3): 0 to 12 [usable: 12],
+  LeaderAndIsr(4): UNSUPPORTED,
+  StopReplica(5): UNSUPPORTED,
+  UpdateMetadata(6): UNSUPPORTED,
+  ControlledShutdown(7): UNSUPPORTED,
+  OffsetCommit(8): 0 to 8 [usable: 8],
+  OffsetFetch(9): 0 to 8 [usable: 8],
+  FindCoordinator(10): 0 to 4 [usable: 4],
+  JoinGroup(11): 0 to 9 [usable: 9],
+  Heartbeat(12): 0 to 4 [usable: 4],
+  LeaveGroup(13): 0 to 5 [usable: 5],
+  SyncGroup(14): 0 to 5 [usable: 
5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +``` + +We can see from the above output that the kafka has 2 nodes. + +We are now ready to apply the `KafkaOpsRequest` CR to scale this cluster. + +## Scale Up Replicas + +Here, we are going to scale up the replicas of the combined cluster to meet the desired number of replicas after scaling. + +#### Create KafkaOpsRequest + +In order to scale up the replicas of the combined cluster, we have to create a `KafkaOpsRequest` CR with our desired replicas. Below is the YAML of the `KafkaOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-hscale-up-combined + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: kafka-dev + horizontalScaling: + node: 3 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing horizontal scaling operation on `kafka-dev` cluster. +- `spec.type` specifies that we are performing `HorizontalScaling` on kafka. +- `spec.horizontalScaling.node` specifies the desired replicas after scaling. 
+ +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-up-combined.yaml +kafkaopsrequest.ops.kubedb.com/kfops-hscale-up-combined created +``` + +#### Verify Combined cluster replicas scaled up successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the replicas of `Kafka` object and related `PetSets` and `Pods`. + +Let's wait for `KafkaOpsRequest` to be `Successful`. Run the following command to watch `KafkaOpsRequest` CR, + +```bash +$ watch kubectl get kafkaopsrequest -n demo +NAME TYPE STATUS AGE +kfops-hscale-up-combined HorizontalScaling Successful 106s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to scale the cluster. + +```bash +$ kubectl describe kafkaopsrequests -n demo kfops-hscale-up-combined +Name: kfops-hscale-up-combined +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-02T10:19:56Z + Generation: 1 + Resource Version: 353093 + UID: f91de2da-82c4-4175-aab4-de0f3e1ce498 +Spec: + Apply: IfReady + Database Ref: + Name: kafka-dev + Horizontal Scaling: + Node: 3 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2024-08-02T10:19:57Z + Message: Kafka ops-request has started to horizontally scaling the nodes + Observed Generation: 1 + Reason: HorizontalScaling + Status: True + Type: HorizontalScaling + Last Transition Time: 2024-08-02T10:20:05Z + Message: get pod; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-dev-0 + Last Transition Time: 2024-08-02T10:20:05Z + Message: evict pod; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-dev-0 + Last Transition Time: 2024-08-02T10:20:15Z + Message: check pod running; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-dev-0 + Last Transition Time: 2024-08-02T10:20:20Z + Message: get pod; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-dev-1 + Last Transition Time: 2024-08-02T10:20:20Z + Message: evict pod; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-dev-1 + Last Transition Time: 2024-08-02T10:21:00Z + Message: check pod running; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-dev-1 + Last Transition Time: 2024-08-02T10:21:05Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-08-02T10:22:15Z + Message: Successfully Scaled Up Server Node + Observed Generation: 1 + Reason: ScaleUpCombined + Status: True + Type: ScaleUpCombined + Last Transition Time: 2024-08-02T10:21:10Z + Message: patch pet setkafka-dev; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: PatchPetSetkafka-dev + Last Transition Time: 2024-08-02T10:22:10Z + Message: node in cluster; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: NodeInCluster + Last Transition Time: 2024-08-02T10:22:15Z + Message: Successfully completed horizontally scale kafka cluster + 
Observed Generation:  1
+    Reason:               Successful
+    Status:               True
+    Type:                 Successful
+  Observed Generation:    1
+  Phase:                  Successful
+Events:
+  Type     Reason                                                         Age    From                         Message
+  ----     ------                                                         ----   ----                         -------
+  Normal   Starting                                                       4m34s  KubeDB Ops-manager Operator  Start processing for KafkaOpsRequest: demo/kfops-hscale-up-combined
+  Normal   Starting                                                       4m34s  KubeDB Ops-manager Operator  Pausing Kafka databse: demo/kafka-dev
+  Normal   Successful                                                     4m34s  KubeDB Ops-manager Operator  Successfully paused Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-hscale-up-combined
+  Warning  get pod; ConditionStatus:True; PodName:kafka-dev-0             4m26s  KubeDB Ops-manager Operator  get pod; ConditionStatus:True; PodName:kafka-dev-0
+  Warning  evict pod; ConditionStatus:True; PodName:kafka-dev-0           4m26s  KubeDB Ops-manager Operator  evict pod; ConditionStatus:True; PodName:kafka-dev-0
+  Warning  check pod running; ConditionStatus:False; PodName:kafka-dev-0  4m21s  KubeDB Ops-manager Operator  check pod running; ConditionStatus:False; PodName:kafka-dev-0
+  Warning  check pod running; ConditionStatus:True; PodName:kafka-dev-0   4m16s  KubeDB Ops-manager Operator  check pod running; ConditionStatus:True; PodName:kafka-dev-0
+  Warning  get pod; ConditionStatus:True; PodName:kafka-dev-1             4m11s  KubeDB Ops-manager Operator  get pod; ConditionStatus:True; PodName:kafka-dev-1
+  Warning  evict pod; ConditionStatus:True; PodName:kafka-dev-1           4m11s  KubeDB Ops-manager Operator  evict pod; ConditionStatus:True; PodName:kafka-dev-1
+  Warning  check pod running; ConditionStatus:False; PodName:kafka-dev-1  4m6s   KubeDB Ops-manager Operator  check pod running; ConditionStatus:False; PodName:kafka-dev-1
+  Warning  check pod running; ConditionStatus:True; PodName:kafka-dev-1   3m31s  KubeDB Ops-manager Operator  check pod running; ConditionStatus:True; PodName:kafka-dev-1
+  Normal   RestartNodes                                                   3m26s  KubeDB Ops-manager Operator  Successfully restarted all nodes
+  Warning  patch pet setkafka-dev; ConditionStatus:True                   3m21s  KubeDB Ops-manager Operator  patch pet setkafka-dev; ConditionStatus:True
+  Warning  node in cluster; ConditionStatus:False                         2m46s  KubeDB Ops-manager Operator  node in cluster; ConditionStatus:False
+  Warning  node in cluster; ConditionStatus:True                          2m21s  KubeDB Ops-manager Operator  node in cluster; ConditionStatus:True
+  Normal   ScaleUpCombined                                                2m16s  KubeDB Ops-manager Operator  Successfully Scaled Up Server Node
+  Normal   Starting                                                       2m16s  KubeDB Ops-manager Operator  Resuming Kafka database: demo/kafka-dev
+  Normal   Successful                                                     2m16s  KubeDB Ops-manager Operator  Successfully resumed Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-hscale-up-combined
+```
+
+Now, we are going to verify the number of replicas this cluster has from the Kafka object, and the number of pods the petset has,
+
+```bash
+$ kubectl get kafka -n demo kafka-dev -o json | jq '.spec.replicas'
+3
+
+$ kubectl get petset -n demo kafka-dev -o json | jq '.spec.replicas'
+3
+```
+
+Now let's connect to a kafka instance and run a Kafka internal command to check the number of replicas,
+```bash
+$ kubectl exec -it -n demo kafka-dev-0 -- kafka-broker-api-versions.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties
+kafka-dev-0.kafka-dev-pods.demo.svc.cluster.local:9092 (id: 0 rack: null) -> (
+  Produce(0): 0 to 9 [usable: 9],
+  Fetch(1): 0 to 15 [usable: 15],
+  ListOffsets(2): 0 to 8 [usable: 8],
+  Metadata(3): 0 to 12 [usable: 12],
+  LeaderAndIsr(4): UNSUPPORTED,
+  StopReplica(5): UNSUPPORTED,
+  UpdateMetadata(6): UNSUPPORTED,
+  ControlledShutdown(7): UNSUPPORTED,
+  
OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +kafka-dev-1.kafka-dev-pods.demo.svc.cluster.local:9092 (id: 1 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 
3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +kafka-dev-2.kafka-dev-pods.demo.svc.cluster.local:9092 (id: 2 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 
[usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +``` + +From all the above outputs, we can see that the combined Kafka cluster now has `3` brokers. That means we have successfully scaled up the replicas of the Kafka combined cluster. + +### Scale Down Replicas + +Here, we are going to scale down the replicas of the Kafka combined cluster to the desired number of replicas. + +#### Create KafkaOpsRequest + +In order to scale down the replicas of the Kafka combined cluster, we have to create a `KafkaOpsRequest` CR with our desired replicas. Below is the YAML of the `KafkaOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-hscale-down-combined + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: kafka-dev + horizontalScaling: + node: 2 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing a horizontal scale-down operation on the `kafka-dev` cluster. +- `spec.type` specifies that we are performing `HorizontalScaling` on Kafka. +- `spec.horizontalScaling.node` specifies the desired number of replicas after scaling. + +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-down-combined.yaml +kafkaopsrequest.ops.kubedb.com/kfops-hscale-down-combined created +``` + +#### Verify Combined cluster replicas scaled down successfully + +If everything goes well, the `KubeDB` Ops-manager operator will update the replicas of the `Kafka` object and the related `PetSets` and `Pods`. + +Let's wait for the `KafkaOpsRequest` to be `Successful`. Run the following command to watch the `KafkaOpsRequest` CR, + +```bash +$ watch kubectl get kafkaopsrequest -n demo +NAME                         TYPE                STATUS       AGE +kfops-hscale-down-combined   HorizontalScaling   Successful   2m32s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest`, we will get an overview of the steps that were followed to scale the cluster.
+ +```bash +$ kubectl describe kafkaopsrequests -n demo kfops-hscale-down-combined +Name: kfops-hscale-down-combined +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-02T10:46:39Z + Generation: 1 + Resource Version: 354924 + UID: f1a0b85d-1a86-463c-a3e4-72947badd108 +Spec: + Apply: IfReady + Database Ref: + Name: kafka-dev + Horizontal Scaling: + Node: 2 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2024-08-02T10:46:39Z + Message: Kafka ops-request has started to horizontally scaling the nodes + Observed Generation: 1 + Reason: HorizontalScaling + Status: True + Type: HorizontalScaling + Last Transition Time: 2024-08-02T10:47:07Z + Message: Successfully Scaled Down Server Node + Observed Generation: 1 + Reason: ScaleDownCombined + Status: True + Type: ScaleDownCombined + Last Transition Time: 2024-08-02T10:46:57Z + Message: reassign partitions; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: ReassignPartitions + Last Transition Time: 2024-08-02T10:46:57Z + Message: is pet set patched; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsPetSetPatched + Last Transition Time: 2024-08-02T10:46:57Z + Message: get pod; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPod + Last Transition Time: 2024-08-02T10:46:58Z + Message: delete pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: DeletePvc + Last Transition Time: 2024-08-02T10:47:02Z + Message: get pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPvc + Last Transition Time: 2024-08-02T10:47:13Z + Message: successfully reconciled the Kafka with modified node + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-02T10:47:18Z + Message: get pod; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-dev-0 + Last Transition Time: 2024-08-02T10:47:18Z + Message: evict pod; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-dev-0 + Last Transition Time: 2024-08-02T10:47:28Z + Message: check pod running; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-dev-0 + Last Transition Time: 2024-08-02T10:47:33Z + Message: get pod; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-dev-1 + Last Transition Time: 2024-08-02T10:47:33Z + Message: evict pod; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-dev-1 + Last Transition Time: 2024-08-02T10:48:53Z + Message: check pod running; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-dev-1 + Last Transition Time: 2024-08-02T10:48:58Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-08-02T10:48:58Z + Message: Successfully completed horizontally scale kafka cluster + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 2m39s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-hscale-down-combined + 
Normal Starting 2m39s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-dev + Normal Successful 2m39s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-hscale-down-combined + Warning reassign partitions; ConditionStatus:True 2m21s KubeDB Ops-manager Operator reassign partitions; ConditionStatus:True + Warning is pet set patched; ConditionStatus:True 2m21s KubeDB Ops-manager Operator is pet set patched; ConditionStatus:True + Warning get pod; ConditionStatus:True 2m21s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 2m20s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:False 2m20s KubeDB Ops-manager Operator get pvc; ConditionStatus:False + Warning get pod; ConditionStatus:True 2m16s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 2m16s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m16s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Normal ScaleDownCombined 2m11s KubeDB Ops-manager Operator Successfully Scaled Down Server Node + Normal UpdatePetSets 2m5s KubeDB Ops-manager Operator successfully reconciled the Kafka with modified node + Warning get pod; ConditionStatus:True; PodName:kafka-dev-0 2m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-dev-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-dev-0 2m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-dev-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-dev-0 115s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-dev-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-dev-0 110s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-dev-0 + Warning get pod; ConditionStatus:True; PodName:kafka-dev-1 105s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-dev-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-dev-1 105s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-dev-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-dev-1 100s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-dev-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-dev-1 25s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-dev-1 + Normal RestartNodes 20s KubeDB Ops-manager Operator Successfully restarted all nodes + Normal Starting 20s KubeDB Ops-manager Operator Resuming Kafka database: demo/kafka-dev + Normal Successful 20s KubeDB Ops-manager Operator Successfully resumed Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-hscale-down-combined +``` + +Now, we are going to verify the number of replicas this cluster has from the Kafka object, number of pods the petset have, + +```bash +$ kubectl get kafka -n demo kafka-dev -o json | jq '.spec.replicas' +2 + +$ kubectl get petset -n demo kafka-dev -o json | jq '.spec.replicas' +2 +``` + +Now let's connect to a kafka instance and run a kafka internal command to check the number of replicas, + +```bash +$ kubectl exec -it -n demo kafka-dev-0 -- kafka-broker-api-versions.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties +kafka-dev-0.kafka-dev-pods.demo.svc.cluster.local:9092 (id: 0 rack: null) 
-> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +kafka-dev-1.kafka-dev-pods.demo.svc.cluster.local:9092 (id: 1 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + 
OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +``` + +From all the above outputs, we can see that the combined cluster now has `2` replicas. That means we have successfully scaled down the replicas of the Kafka combined cluster. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete kf -n demo kafka-dev +kubectl delete kafkaopsrequest -n demo kfops-hscale-up-combined kfops-hscale-down-combined +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md). +- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md). +- Monitor your Kafka with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md). + +[//]: # (- Monitor your Kafka with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).) +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/kafka/scaling/horizontal-scaling/overview.md b/docs/guides/kafka/scaling/horizontal-scaling/overview.md new file mode 100644 index 0000000000..2f28bcea59 --- /dev/null +++ b/docs/guides/kafka/scaling/horizontal-scaling/overview.md @@ -0,0 +1,54 @@ +--- +title: Kafka Horizontal Scaling Overview +menu: + docs_{{ .version }}: + identifier: kf-horizontal-scaling-overview + name: Overview + parent: kf-horizontal-scaling + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Kafka Horizontal Scaling + +This guide will give an overview of how the KubeDB Ops-manager operator scales up or down the replicas of the various `Kafka` cluster components, such as Combined, Broker, and Controller.
+ +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [Kafka](/docs/guides/kafka/concepts/kafka.md) + - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md) + +## How Horizontal Scaling Process Works + +The following diagram shows how the KubeDB Ops-manager operator scales up or down `Kafka` database components. Open the image in a new tab to see the enlarged version. + +<figure align="center"> +  <img alt="Horizontal scaling process of Kafka" src="/docs/images/day-2-operation/kafka/kf-horizontal-scaling.svg"> +  <figcaption align="center">Fig: Horizontal scaling process of Kafka</figcaption> +</figure>
+ +The Horizontal scaling process consists of the following steps: + +1. At first, a user creates a `Kafka` Custom Resource (CR). + +2. The `KubeDB` Provisioner operator watches the `Kafka` CR. + +3. When the operator finds a `Kafka` CR, it creates the required number of `PetSets` and related resources like Secrets, Services, etc. + +4. Then, in order to scale the various components (i.e. Broker, Controller, or Combined) of the `Kafka` cluster, the user creates a `KafkaOpsRequest` CR with the desired information. + +5. The `KubeDB` Ops-manager operator watches the `KafkaOpsRequest` CR. + +6. When it finds a `KafkaOpsRequest` CR, it halts the `Kafka` object which is referred to from the `KafkaOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Kafka` object during the horizontal scaling process. + +7. Then the `KubeDB` Ops-manager operator will scale the related PetSet Pods to reach the expected number of replicas defined in the `KafkaOpsRequest` CR. + +8. After successfully scaling the replicas of the related PetSet Pods, the `KubeDB` Ops-manager operator updates the number of replicas in the `Kafka` object to reflect the updated state. + +9. After the successful scaling of the `Kafka` replicas, the `KubeDB` Ops-manager operator resumes the `Kafka` object so that the `KubeDB` Provisioner operator resumes its usual operations. + +In the next docs, we are going to show a step-by-step guide on horizontal scaling of a Kafka cluster using the `KafkaOpsRequest` CRD. \ No newline at end of file diff --git a/docs/guides/kafka/scaling/horizontal-scaling/topology.md b/docs/guides/kafka/scaling/horizontal-scaling/topology.md new file mode 100644 index 0000000000..2ec7ce487f --- /dev/null +++ b/docs/guides/kafka/scaling/horizontal-scaling/topology.md @@ -0,0 +1,1151 @@ +--- +title: Horizontal Scaling Topology Kafka +menu: + docs_{{ .version }}: + identifier: kf-horizontal-scaling-topology + name: Topology Cluster + parent: kf-horizontal-scaling + weight: 20 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Horizontal Scale Kafka Topology Cluster + +This guide will show you how to use the `KubeDB` Ops-manager operator to scale a Kafka topology cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [Kafka](/docs/guides/kafka/concepts/kafka.md) + - [Topology](/docs/guides/kafka/clustering/topology-cluster/index.md) + - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md) + - [Horizontal Scaling Overview](/docs/guides/kafka/scaling/horizontal-scaling/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in the [docs/examples/kafka](/docs/examples/kafka) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.
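+ +As a quick pre-flight check (a sketch only; the `kubedb` namespace assumes a default installation, and the exact pod names will differ in your cluster), you can confirm that both the KubeDB Provisioner and Ops-manager operator pods are `Running` before creating any ops request, + +```bash +# List the operator pods; both the provisioner and the ops-manager should be Running. +$ kubectl get pods -n kubedb +```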
+ +## Apply Horizontal Scaling on Topology Cluster + +Here, we are going to deploy a `Kafka` topology cluster using a version supported by the `KubeDB` operator. Then we are going to apply horizontal scaling on it. + +### Prepare Kafka Topology cluster + +Now, we are going to deploy a `Kafka` topology cluster with version `3.6.1`. + +### Deploy Kafka topology cluster + +In this section, we are going to deploy a Kafka topology cluster. Then, in the next section, we will scale the cluster using the `KafkaOpsRequest` CRD. Below is the YAML of the `Kafka` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.6.1 + topology: + broker: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut +``` + +Let's create the `Kafka` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/scaling/kafka-topology.yaml +kafka.kubedb.com/kafka-prod created +``` + +Now, wait until `kafka-prod` has status `Ready`, i.e., + +```bash +$ kubectl get kf -n demo -w +NAME         TYPE            VERSION   STATUS         AGE +kafka-prod   kubedb.com/v1   3.6.1     Provisioning   0s +kafka-prod   kubedb.com/v1   3.6.1     Provisioning   24s +. +. +kafka-prod   kubedb.com/v1   3.6.1     Ready          92s +``` + +Let's check the number of replicas from the Kafka object and the number of replicas of the PetSets, + +**Broker Replicas** + +```bash +$ kubectl get kafka -n demo kafka-prod -o json | jq '.spec.topology.broker.replicas' +2 + +$ kubectl get petset -n demo kafka-prod-broker -o json | jq '.spec.replicas' +2 +``` + +**Controller Replicas** + +```bash +$ kubectl get kafka -n demo kafka-prod -o json | jq '.spec.topology.controller.replicas' +2 + +$ kubectl get petset -n demo kafka-prod-controller -o json | jq '.spec.replicas' +2 +``` + +We can see from the commands above that the cluster has 2 replicas for both broker and controller. + +We can also verify the replicas of the topology cluster with an internal Kafka command by exec-ing into a replica, as sketched below.
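+ +A quick shortcut first (a sketch; it assumes the same in-pod client config, `config/clientauth.properties`, used throughout this guide): count the broker endpoints reported in the cluster metadata instead of reading the full API-version dump. Each registered broker prints exactly one `(id: ...)` header line, so the count equals the broker count, + +```bash +# Count registered brokers by counting the "(id: " header lines in the metadata output. +$ kubectl exec -it -n demo kafka-prod-broker-0 -- bash -c 'kafka-broker-api-versions.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties | grep -c "(id: "' +2 +```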
+ +Now let's exec to a broker instance and run a kafka internal command to check the number of replicas for broker and controller., + +**Broker** + +```bash +$ kubectl exec -it -n demo kafka-prod-broker-0 -- kafka-broker-api-versions.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties +kafka-prod-broker-0.kafka-prod-pods.demo.svc.cluster.local:9092 (id: 0 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +kafka-prod-broker-1.kafka-prod-pods.demo.svc.cluster.local:9092 (id: 1 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 
4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +``` + +**Controller** + +```bash +$ kubectl exec -it -n demo kafka-prod-broker-0 -- kafka-metadata-quorum.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties describe --status | grep CurrentObservers +CurrentObservers: [0,1] +``` + +We can see from the above output that the kafka has 2 nodes for broker and 2 nodes for controller. + +We are now ready to apply the `KafkaOpsRequest` CR to scale this cluster. + +## Scale Up Replicas + +Here, we are going to scale up the replicas of the topology cluster to meet the desired number of replicas after scaling. + +#### Create KafkaOpsRequest + +In order to scale up the replicas of the topology cluster, we have to create a `KafkaOpsRequest` CR with our desired replicas. Below is the YAML of the `KafkaOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-hscale-up-topology + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: kafka-prod + horizontalScaling: + topology: + broker: 3 + controller: 3 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing horizontal scaling operation on `kafka-prod` cluster. +- `spec.type` specifies that we are performing `HorizontalScaling` on kafka. +- `spec.horizontalScaling.topology.broker` specifies the desired replicas after scaling for broker. 
+- `spec.horizontalScaling.topology.controller` specifies the desired replicas after scaling for controller. + +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-up-topology.yaml +kafkaopsrequest.ops.kubedb.com/kfops-hscale-up-topology created +``` + +> **Note:** If you want to scale only the broker or only the controller, specify the desired replicas for just that component in the `KafkaOpsRequest` CR; you can specify one at a time. Scaling only the broker does not require any node restart to apply the changes, but scaling the controller requires a restart of all nodes. + +#### Verify Topology cluster replicas scaled up successfully + +If everything goes well, the `KubeDB` Ops-manager operator will update the replicas of the `Kafka` object and the related `PetSets` and `Pods`. + +Let's wait for the `KafkaOpsRequest` to be `Successful`. Run the following command to watch the `KafkaOpsRequest` CR, + +```bash +$ watch kubectl get kafkaopsrequest -n demo +NAME                       TYPE                STATUS       AGE +kfops-hscale-up-topology   HorizontalScaling   Successful   106s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest`, we will get an overview of the steps that were followed to scale the cluster. + +```bash +$ kubectl describe kafkaopsrequests -n demo kfops-hscale-up-topology +Name:         kfops-hscale-up-topology +Namespace:    demo +Labels:       <none> +Annotations:  <none> +API Version:  ops.kubedb.com/v1alpha1 +Kind:         KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-02T11:02:51Z + Generation: 1 + Resource Version: 356503 + UID: 44e0db0c-2094-4c13-a3be-9ca680888545 +Spec: + Apply: IfReady + Database Ref: + Name: kafka-prod + Horizontal Scaling: + Topology: + Broker: 3 + Controller: 3 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2024-08-02T11:02:51Z + Message: Kafka ops-request has started to horizontally scaling the nodes + Observed Generation: 1 + Reason: HorizontalScaling + Status: True + Type: HorizontalScaling + Last Transition Time: 2024-08-02T11:02:59Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T11:03:00Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T11:03:09Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T11:03:14Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T11:03:14Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T11:03:24Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T11:03:29Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status:
True + Type: GetPod--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T11:03:30Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T11:03:59Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T11:04:04Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T11:04:05Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T11:04:19Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T11:04:24Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-08-02T11:04:59Z + Message: Successfully Scaled Up Broker + Observed Generation: 1 + Reason: ScaleUpBroker + Status: True + Type: ScaleUpBroker + Last Transition Time: 2024-08-02T11:04:30Z + Message: patch pet setkafka-prod-broker; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: PatchPetSetkafka-prod-broker + Last Transition Time: 2024-08-02T11:04:55Z + Message: node in cluster; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: NodeInCluster + Last Transition Time: 2024-08-02T11:05:15Z + Message: Successfully Scaled Up Controller + Observed Generation: 1 + Reason: ScaleUpController + Status: True + Type: ScaleUpController + Last Transition Time: 2024-08-02T11:05:05Z + Message: patch pet setkafka-prod-controller; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: PatchPetSetkafka-prod-controller + Last Transition Time: 2024-08-02T11:05:15Z + Message: Successfully completed horizontally scale kafka cluster + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 4m19s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-hscale-up-topology + Normal Starting 4m19s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-prod + Normal Successful 4m19s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-hscale-up-topology + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 4m11s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 4m10s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 4m6s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 4m1s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning get pod; ConditionStatus:True; 
PodName:kafka-prod-controller-1 3m56s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 3m56s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 3m50s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 3m46s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 3m41s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 3m41s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0 3m36s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 3m11s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 3m6s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 3m5s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 3m1s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 2m51s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Normal RestartNodes 2m46s KubeDB Ops-manager Operator Successfully restarted all nodes + Warning patch pet setkafka-prod-broker; ConditionStatus:True 2m40s KubeDB Ops-manager Operator patch pet setkafka-prod-broker; ConditionStatus:True + Warning node in cluster; ConditionStatus:False 2m36s KubeDB Ops-manager Operator node in cluster; ConditionStatus:False + Warning node in cluster; ConditionStatus:True 2m15s KubeDB Ops-manager Operator node in cluster; ConditionStatus:True + Normal ScaleUpBroker 2m10s KubeDB Ops-manager Operator Successfully Scaled Up Broker + Warning patch pet setkafka-prod-controller; ConditionStatus:True 2m5s KubeDB Ops-manager Operator patch pet setkafka-prod-controller; ConditionStatus:True + Warning node in cluster; ConditionStatus:True 2m KubeDB Ops-manager Operator node in cluster; ConditionStatus:True + Normal ScaleUpController 115s KubeDB Ops-manager Operator Successfully Scaled Up Controller + Normal Starting 115s KubeDB Ops-manager Operator Resuming Kafka database: demo/kafka-prod + Normal Successful 115s KubeDB Ops-manager Operator Successfully resumed Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-hscale-up-topology +``` + +Now, we are going to verify the number of replicas this cluster has from the Kafka object, number of pods the petset have, + +**Broker Replicas** + +```bash +$ kubectl get kafka -n demo kafka-prod -o json | jq '.spec.topology.broker.replicas' +3 + +$ kubectl get petset -n demo kafka-prod-broker 
-o json | jq '.spec.replicas' +3 +``` + +Now let's connect to a kafka instance and run a kafka internal command to check the number of replicas of topology cluster for both broker and controller., + +**Broker** + +```bash +$ kubectl exec -it -n demo kafka-prod-broker-0 -- kafka-broker-api-versions.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties +kafka-prod-broker-0.kafka-prod-pods.demo.svc.cluster.local:9092 (id: 0 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +kafka-prod-broker-1.kafka-prod-pods.demo.svc.cluster.local:9092 (id: 1 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + 
JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +kafka-prod-broker-2.kafka-prod-pods.demo.svc.cluster.local:9092 (id: 2 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 
[usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +``` + +**Controller** + +```bash +$ kubectl exec -it -n demo kafka-prod-broker-0 -- kafka-metadata-quorum.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties describe --status | grep CurrentObservers +CurrentObservers: [0,1,2] +``` + +From all the above outputs, we can see that the topology Kafka cluster now has `3` brokers and `3` controllers. That means we have successfully scaled up the replicas of the Kafka topology cluster. + +### Scale Down Replicas + +Here, we are going to scale down the replicas of the Kafka topology cluster to the desired number of replicas. + +#### Create KafkaOpsRequest + +In order to scale down the replicas of the Kafka topology cluster, we have to create a `KafkaOpsRequest` CR with our desired replicas. Below is the YAML of the `KafkaOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-hscale-down-topology + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: kafka-prod + horizontalScaling: + topology: + broker: 2 + controller: 2 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing a horizontal scale-down operation on the `kafka-prod` cluster. +- `spec.type` specifies that we are performing `HorizontalScaling` on Kafka. +- `spec.horizontalScaling.topology.broker` specifies the desired number of replicas after scaling for the broker nodes. +- `spec.horizontalScaling.topology.controller` specifies the desired number of replicas after scaling for the controller nodes. + +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/scaling/horizontal-scaling/kafka-hscale-down-topology.yaml +kafkaopsrequest.ops.kubedb.com/kfops-hscale-down-topology created +``` + +#### Verify Topology cluster replicas scaled down successfully + +If everything goes well, the `KubeDB` Ops-manager operator will update the replicas of the `Kafka` object and the related `PetSets` and `Pods`. + +Let's wait for the `KafkaOpsRequest` to be `Successful`.
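+ +If you prefer to block until the phase flips instead of watching (a minimal sketch; it assumes the `status.phase` field shown in the describe output below), `kubectl wait` can do it, + +```bash +# Block until the ops request reports phase Successful, or give up after 10 minutes. +$ kubectl wait kafkaopsrequest -n demo kfops-hscale-down-topology --for=jsonpath='{.status.phase}'=Successful --timeout=10m +kafkaopsrequest.ops.kubedb.com/kfops-hscale-down-topology condition met +```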
Run the following command to watch `KafkaOpsRequest` CR, + +```bash +$ watch kubectl get kafkaopsrequest -n demo +NAME TYPE STATUS AGE +kfops-hscale-down-topology HorizontalScaling Successful 2m32s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to scale the cluster. + +```bash +$ kubectl describe kafkaopsrequests -n demo kfops-hscale-down-topology +Name: kfops-hscale-down-topology +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-02T11:14:18Z + Generation: 1 + Resource Version: 357545 + UID: b786d791-6ba8-4f1c-ade8-9443e049cede +Spec: + Apply: IfReady + Database Ref: + Name: kafka-prod + Horizontal Scaling: + Topology: + Broker: 2 + Controller: 2 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2024-08-02T11:14:18Z + Message: Kafka ops-request has started to horizontally scaling the nodes + Observed Generation: 1 + Reason: HorizontalScaling + Status: True + Type: HorizontalScaling + Last Transition Time: 2024-08-02T11:14:46Z + Message: Successfully Scaled Down Broker + Observed Generation: 1 + Reason: ScaleDownBroker + Status: True + Type: ScaleDownBroker + Last Transition Time: 2024-08-02T11:14:36Z + Message: reassign partitions; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: ReassignPartitions + Last Transition Time: 2024-08-02T11:14:36Z + Message: is pet set patched; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsPetSetPatched + Last Transition Time: 2024-08-02T11:14:37Z + Message: get pod; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPod + Last Transition Time: 2024-08-02T11:14:37Z + Message: delete pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: DeletePvc + Last Transition Time: 2024-08-02T11:15:26Z + Message: get pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPvc + Last Transition Time: 2024-08-02T11:15:31Z + Message: Successfully Scaled Down Controller + Observed Generation: 1 + Reason: ScaleDownController + Status: True + Type: ScaleDownController + Last Transition Time: 2024-08-02T11:15:38Z + Message: successfully reconciled the Kafka with modified node + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-02T11:15:43Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T11:15:44Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T11:15:53Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T11:15:58Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T11:15:58Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T11:16:08Z + Message: check pod running; 
ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T11:16:13Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T11:16:13Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T11:16:58Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T11:17:03Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T11:17:03Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T11:17:13Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T11:17:18Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-08-02T11:17:19Z + Message: Successfully completed horizontally scale kafka cluster + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 8m35s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-hscale-down-topology + Normal Starting 8m35s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-prod + Normal Successful 8m35s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-hscale-down-topology + Warning reassign partitions; ConditionStatus:True 8m17s KubeDB Ops-manager Operator reassign partitions; ConditionStatus:True + Warning is pet set patched; ConditionStatus:True 8m17s KubeDB Ops-manager Operator is pet set patched; ConditionStatus:True + Warning get pod; ConditionStatus:True 8m16s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 8m16s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:False 8m16s KubeDB Ops-manager Operator get pvc; ConditionStatus:False + Warning get pod; ConditionStatus:True 8m12s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 8m12s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 8m12s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Normal ScaleDownBroker 8m7s KubeDB Ops-manager Operator Successfully Scaled Down Broker + Warning reassign partitions; ConditionStatus:True 7m31s KubeDB Ops-manager Operator reassign partitions; ConditionStatus:True + Warning is pet set patched; ConditionStatus:True 7m31s KubeDB Ops-manager Operator is pet set patched; ConditionStatus:True + Warning get pod; ConditionStatus:True 7m31s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; 
ConditionStatus:True 7m31s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:False 7m31s KubeDB Ops-manager Operator get pvc; ConditionStatus:False + Warning get pod; ConditionStatus:True 7m27s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 7m27s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 7m27s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Normal ScaleDownController 7m22s KubeDB Ops-manager Operator Successfully Scaled Down Controller + Normal UpdatePetSets 7m15s KubeDB Ops-manager Operator successfully reconciled the Kafka with modified node + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 7m10s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 7m9s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 7m5s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 7m KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 6m55s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 6m55s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 6m50s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 6m45s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 6m40s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 6m40s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0 6m35s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 5m55s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 5m50s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 5m50s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 5m45s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 5m40s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; 
PodName:kafka-prod-broker-1 + Normal RestartNodes 5m35s KubeDB Ops-manager Operator Successfully restarted all nodes + Normal Starting 5m35s KubeDB Ops-manager Operator Resuming Kafka database: demo/kafka-prod + Normal Successful 5m34s KubeDB Ops-manager Operator Successfully resumed Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-hscale-down-topology +``` + +Now, we are going to verify the number of replicas of this cluster from the Kafka object and the number of pods the PetSets have, + +**Broker Replicas** + +```bash +$ kubectl get kafka -n demo kafka-prod -o json | jq '.spec.topology.broker.replicas' +2 + +$ kubectl get petset -n demo kafka-prod-broker -o json | jq '.spec.replicas' +2 +``` + +**Controller Replicas** + +```bash +$ kubectl get kafka -n demo kafka-prod -o json | jq '.spec.topology.controller.replicas' +2 + +$ kubectl get petset -n demo kafka-prod-controller -o json | jq '.spec.replicas' +2 +``` + +Now, let's connect to a Kafka instance and run an internal Kafka command to check the number of replicas for both broker and controller nodes, + +**Broker** + +```bash +$ kubectl exec -it -n demo kafka-prod-broker-0 -- kafka-broker-api-versions.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties +kafka-prod-broker-0.kafka-prod-pods.demo.svc.cluster.local:9092 (id: 0 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +kafka-prod-broker-1.kafka-prod-pods.demo.svc.cluster.local:9092 (id: 1 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +``` + +**Controller** + +```bash +$ kubectl exec -it -n demo kafka-prod-controller-0 -- kafka-metadata-quorum.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties describe --status | grep CurrentObservers +CurrentObservers: [0,1] +``` + +From all the above outputs, we can see that the replicas of both the broker and the controller of the topology cluster are `2`. That means we have successfully scaled down the replicas of the Kafka topology cluster.
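+ +As one more optional sanity check, you can also describe the controller quorum's replication state. This is just a sketch of an extra verification step using the same bootstrap server and client configuration as above; the exact columns printed depend on your Kafka version: + +```bash +$ kubectl exec -it -n demo kafka-prod-controller-0 -- kafka-metadata-quorum.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties describe --replication +```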
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete kf -n demo kafka-prod +kubectl delete kafkaopsrequest -n demo kfops-hscale-up-topology kfops-hscale-down-topology +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md). +- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md). +- Monitor your Kafka with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md). + +[//]: # (- Monitor your Kafka with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).) +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/kafka/scaling/vertical-scaling/_index.md b/docs/guides/kafka/scaling/vertical-scaling/_index.md new file mode 100644 index 0000000000..8eeb4e12f0 --- /dev/null +++ b/docs/guides/kafka/scaling/vertical-scaling/_index.md @@ -0,0 +1,10 @@ +--- +title: Vertical Scaling +menu: + docs_{{ .version }}: + identifier: kf-vertical-scaling + name: Vertical Scaling + parent: kf-scaling + weight: 20 +menu_name: docs_{{ .version }} +--- \ No newline at end of file diff --git a/docs/guides/kafka/scaling/vertical-scaling/combined.md b/docs/guides/kafka/scaling/vertical-scaling/combined.md new file mode 100644 index 0000000000..9c3df81fd8 --- /dev/null +++ b/docs/guides/kafka/scaling/vertical-scaling/combined.md @@ -0,0 +1,308 @@ +--- +title: Vertical Scaling Kafka Combined Cluster +menu: + docs_{{ .version }}: + identifier: kf-vertical-scaling-combined + name: Combined Cluster + parent: kf-vertical-scaling + weight: 30 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Vertical Scale Kafka Combined Cluster + +This guide will show you how to use `KubeDB` Ops-manager operator to update the resources of a Kafka combined cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [Kafka](/docs/guides/kafka/concepts/kafka.md) + - [Combined](/docs/guides/kafka/clustering/combined-cluster/index.md) + - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md) + - [Vertical Scaling Overview](/docs/guides/kafka/scaling/vertical-scaling/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/kafka](/docs/examples/kafka) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Apply Vertical Scaling on Combined Cluster + +Here, we are going to deploy a `Kafka` combined cluster using a version supported by the `KubeDB` operator. Then we are going to apply vertical scaling on it. + +### Prepare Kafka Combined Cluster + +Now, we are going to deploy a `Kafka` combined cluster with version `3.6.1`.
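+ +Before deploying, you can optionally confirm that this version is available in your cluster's version catalog. This is just a quick sanity check; it assumes the default KubeDB catalog is installed, and that `kfversion` is the short name for the `KafkaVersion` CRD: + +```bash +$ kubectl get kfversion 3.6.1 +```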
+ +### Deploy Kafka Combined Cluster + +In this section, we are going to deploy a Kafka combined cluster. Then, in the next section we will update the resources of the database using `KafkaOpsRequest` CRD. Below is the YAML of the `Kafka` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-dev + namespace: demo +spec: + replicas: 2 + version: 3.6.1 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut +``` + +Let's create the `Kafka` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/scaling/kafka-combined.yaml +kafka.kubedb.com/kafka-dev created +``` + +Now, wait until `kafka-dev` has status `Ready`. i.e., + +```bash +$ kubectl get kf -n demo -w +NAME TYPE VERSION STATUS AGE +kafka-dev kubedb.com/v1 3.6.1 Provisioning 0s +kafka-dev kubedb.com/v1 3.6.1 Provisioning 24s +. +. +kafka-dev kubedb.com/v1 3.6.1 Ready 92s +``` + +Let's check the Pod containers resources, + +```bash +$ kubectl get pod -n demo kafka-dev-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "memory": "1Gi" + }, + "requests": { + "cpu": "500m", + "memory": "1Gi" + } +} +``` +These are the default resources of the Kafka combined cluster set by the `KubeDB` operator. + +We are now ready to apply the `KafkaOpsRequest` CR to update the resources of this database. + +### Vertical Scaling + +Here, we are going to update the resources of the combined cluster to meet the desired resources after scaling. + +#### Create KafkaOpsRequest + +In order to update the resources of the database, we have to create a `KafkaOpsRequest` CR with our desired resources. Below is the YAML of the `KafkaOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-vscale-combined + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: kafka-dev + verticalScaling: + node: + resources: + requests: + memory: "1.2Gi" + cpu: "0.6" + limits: + memory: "1.2Gi" + cpu: "0.6" + timeout: 5m + apply: IfReady +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing a vertical scaling operation on the `kafka-dev` cluster. +- `spec.type` specifies that we are performing `VerticalScaling` on Kafka. +- `spec.verticalScaling.node` specifies the desired resources after scaling. + +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/scaling/vertical-scaling/kafka-vertical-scaling-combined.yaml +kafkaopsrequest.ops.kubedb.com/kfops-vscale-combined created +``` + +#### Verify Kafka Combined cluster resources updated successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the resources of `Kafka` object and related `PetSets` and `Pods`. + +Let's wait for `KafkaOpsRequest` to be `Successful`. Run the following command to watch `KafkaOpsRequest` CR, + +```bash +$ kubectl get kafkaopsrequest -n demo +NAME TYPE STATUS AGE +kfops-vscale-combined VerticalScaling Successful 3m56s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to scale the cluster.
+ +```bash +$ kubectl describe kafkaopsrequest -n demo kfops-vscale-combined +Name: kfops-vscale-combined +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-02T05:59:06Z + Generation: 1 + Resource Version: 336197 + UID: 5fd90feb-eed2-4130-8762-442f2f4d2698 +Spec: + Apply: IfReady + Database Ref: + Name: kafka-dev + Timeout: 5m + Type: VerticalScaling + Vertical Scaling: + Node: + Resources: + Limits: + Cpu: 0.6 + Memory: 1.2Gi + Requests: + Cpu: 0.6 + Memory: 1.2Gi +Status: + Conditions: + Last Transition Time: 2024-08-02T05:59:06Z + Message: Kafka ops-request has started to vertically scaling the kafka nodes + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2024-08-02T05:59:09Z + Message: Successfully updated PetSets Resources + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-02T05:59:14Z + Message: get pod; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-dev-0 + Last Transition Time: 2024-08-02T05:59:14Z + Message: evict pod; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-dev-0 + Last Transition Time: 2024-08-02T05:59:29Z + Message: check pod running; ConditionStatus:True; PodName:kafka-dev-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-dev-0 + Last Transition Time: 2024-08-02T05:59:34Z + Message: get pod; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-dev-1 + Last Transition Time: 2024-08-02T05:59:34Z + Message: evict pod; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-dev-1 + Last Transition Time: 2024-08-02T06:00:59Z + Message: check pod running; ConditionStatus:True; PodName:kafka-dev-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-dev-1 + Last Transition Time: 2024-08-02T06:01:04Z + Message: Successfully Restarted Pods With Resources + Observed Generation: 1 + Reason: RestartPods + Status: True + Type: RestartPods + Last Transition Time: 2024-08-02T06:01:04Z + Message: Successfully completed the vertical scaling for kafka + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 2m38s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-vscale-combined + Normal Starting 2m38s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-dev + Normal Successful 2m38s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-vscale-combined + Normal UpdatePetSets 2m35s KubeDB Ops-manager Operator Successfully updated PetSets Resources + Warning get pod; ConditionStatus:True; PodName:kafka-dev-0 2m30s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-dev-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-dev-0 2m30s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-dev-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-dev-0 2m25s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-dev-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-dev-0 2m15s 
KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-dev-0 + Warning get pod; ConditionStatus:True; PodName:kafka-dev-1 2m10s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-dev-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-dev-1 2m10s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-dev-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-dev-1 2m5s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-dev-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-dev-1 45s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-dev-1 + Normal RestartPods 40s KubeDB Ops-manager Operator Successfully Restarted Pods With Resources + Normal Starting 40s KubeDB Ops-manager Operator Resuming Kafka database: demo/kafka-dev + Normal Successful 40s KubeDB Ops-manager Operator Successfully resumed Kafka database: demo/kafka-dev for KafkaOpsRequest: kfops-vscale-combined +``` + +Now, we are going to verify from one of the Pod YAMLs whether the resources of the combined cluster have been updated to match the desired state. Let's check: + +```bash +$ kubectl get pod -n demo kafka-dev-1 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "600m", + "memory": "1288490188800m" + }, + "requests": { + "cpu": "600m", + "memory": "1288490188800m" + } +} +``` + +Here, `1288490188800m` is the canonical milli-bytes form Kubernetes uses to store `1.2Gi` (1.2 × 1024³ bytes × 1000). The above output verifies that we have successfully scaled up the resources of the Kafka combined cluster. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete kf -n demo kafka-dev +kubectl delete kafkaopsrequest -n demo kfops-vscale-combined +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md). +- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md). +- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md). + +[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).) +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/kafka/scaling/vertical-scaling/overview.md b/docs/guides/kafka/scaling/vertical-scaling/overview.md new file mode 100644 index 0000000000..2c95d1867f --- /dev/null +++ b/docs/guides/kafka/scaling/vertical-scaling/overview.md @@ -0,0 +1,54 @@ +--- +title: Kafka Vertical Scaling Overview +menu: + docs_{{ .version }}: + identifier: kf-vertical-scaling-overview + name: Overview + parent: kf-vertical-scaling + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Kafka Vertical Scaling + +This guide will give an overview of how the KubeDB Ops-manager operator updates the resources (e.g., CPU and memory) of a `Kafka` cluster. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [Kafka](/docs/guides/kafka/concepts/kafka.md) + - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md) + +## How Vertical Scaling Process Works + +The following diagram shows how KubeDB Ops-manager operator updates the resources of the `Kafka`. Open the image in a new tab to see the enlarged version. + +<figure align="center">
+  Vertical scaling process of Kafka +
Fig: Vertical scaling process of Kafka
+
+ +The vertical scaling process consists of the following steps: + +1. At first, a user creates a `Kafka` Custom Resource (CR). + +2. `KubeDB` Provisioner operator watches the `Kafka` CR. + +3. When the operator finds a `Kafka` CR, it creates required number of `PetSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to update the resources(for example `CPU`, `Memory` etc.) of the `Kafka` cluster, the user creates a `KafkaOpsRequest` CR with desired information. + +5. `KubeDB` Ops-manager operator watches the `KafkaOpsRequest` CR. + +6. When it finds a `KafkaOpsRequest` CR, it halts the `Kafka` object which is referred from the `KafkaOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Kafka` object during the vertical scaling process. + +7. Then the `KubeDB` Ops-manager operator will update resources of the PetSet Pods to reach desired state. + +8. After the successful update of the resources of the PetSet's replica, the `KubeDB` Ops-manager operator updates the `Kafka` object to reflect the updated state. + +9. After the successful update of the `Kafka` resources, the `KubeDB` Ops-manager operator resumes the `Kafka` object so that the `KubeDB` Provisioner operator resumes its usual operations. + +In the next docs, we are going to show a step by step guide on updating resources of Kafka database using `KafkaOpsRequest` CRD. \ No newline at end of file diff --git a/docs/guides/kafka/scaling/vertical-scaling/topology.md b/docs/guides/kafka/scaling/vertical-scaling/topology.md new file mode 100644 index 0000000000..810c083768 --- /dev/null +++ b/docs/guides/kafka/scaling/vertical-scaling/topology.md @@ -0,0 +1,395 @@ +--- +title: Vertical Scaling Kafka Topology Cluster +menu: + docs_{{ .version }}: + identifier: kf-vertical-scaling-topology + name: Topology Cluster + parent: kf-vertical-scaling + weight: 30 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Vertical Scale Kafka Topology Cluster + +This guide will show you how to use `KubeDB` Ops-manager operator to update the resources of a Kafka topology cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [Kafka](/docs/guides/kafka/concepts/kafka.md) + - [Topology](/docs/guides/kafka/clustering/topology-cluster/index.md) + - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md) + - [Vertical Scaling Overview](/docs/guides/kafka/scaling/vertical-scaling/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/kafka](/docs/examples/kafka) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Apply Vertical Scaling on Topology Cluster + +Here, we are going to deploy a `Kafka` topology cluster using a supported version by `KubeDB` operator. Then we are going to apply vertical scaling on it. 
+ +### Prepare Kafka Topology Cluster + +Now, we are going to deploy a `Kafka` topology cluster with version `3.6.1`. + +### Deploy Kafka Topology Cluster + +In this section, we are going to deploy a Kafka topology cluster. Then, in the next section we will update the resources of the database using `KafkaOpsRequest` CRD. Below is the YAML of the `Kafka` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-prod + namespace: demo +spec: + version: 3.6.1 + topology: + broker: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + controller: + replicas: 2 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut +``` + +Let's create the `Kafka` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/scaling/kafka-topology.yaml +kafka.kubedb.com/kafka-prod created +``` + +Now, wait until `kafka-prod` has status `Ready`. i.e., + +```bash +$ kubectl get kf -n demo -w +NAME TYPE VERSION STATUS AGE +kafka-prod kubedb.com/v1 3.6.1 Provisioning 0s +kafka-prod kubedb.com/v1 3.6.1 Provisioning 24s +. +. +kafka-prod kubedb.com/v1 3.6.1 Ready 92s +``` + +Let's check the Pod containers resources for both `broker` and `controller` of the Kafka topology cluster. Run the following command to get the resources of the `broker` and `controller` containers of the Kafka topology cluster: + +```bash +$ kubectl get pod -n demo kafka-prod-broker-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "memory": "1Gi" + }, + "requests": { + "cpu": "500m", + "memory": "1Gi" + } +} +``` + +```bash +$ kubectl get pod -n demo kafka-prod-controller-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "memory": "1Gi" + }, + "requests": { + "cpu": "500m", + "memory": "1Gi" + } +} +``` +These are the default resources of the Kafka topology cluster set by the `KubeDB` operator. + +We are now ready to apply the `KafkaOpsRequest` CR to update the resources of this database. + +### Vertical Scaling + +Here, we are going to update the resources of the topology cluster to meet the desired resources after scaling. + +#### Create KafkaOpsRequest + +In order to update the resources of the database, we have to create a `KafkaOpsRequest` CR with our desired resources. Below is the YAML of the `KafkaOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-vscale-topology + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: kafka-prod + verticalScaling: + broker: + resources: + requests: + memory: "1.2Gi" + cpu: "0.6" + limits: + memory: "1.2Gi" + cpu: "0.6" + controller: + resources: + requests: + memory: "1.1Gi" + cpu: "0.6" + limits: + memory: "1.1Gi" + cpu: "0.6" + timeout: 5m + apply: IfReady +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing a vertical scaling operation on the `kafka-prod` cluster. +- `spec.type` specifies that we are performing `VerticalScaling` on Kafka. +- `spec.verticalScaling.broker` and `spec.verticalScaling.controller` specify the desired resources for the broker and controller nodes after scaling.
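+ +If you want to sanity-check this manifest before applying it, you can run a server-side dry run, which validates the object against the CRD schema without creating anything. This is a quick sketch and assumes you have saved the manifest locally as `kfops-vscale-topology.yaml`: + +```bash +$ kubectl apply --dry-run=server -f kfops-vscale-topology.yaml +```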
+ +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/scaling/vertical-scaling/kafka-vertical-scaling-topology.yaml +kafkaopsrequest.ops.kubedb.com/kfops-vscale-topology created +``` + +#### Verify Kafka Topology cluster resources updated successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the resources of `Kafka` object and related `PetSets` and `Pods`. + +Let's wait for `KafkaOpsRequest` to be `Successful`. Run the following command to watch `KafkaOpsRequest` CR, + +```bash +$ kubectl get kafkaopsrequest -n demo +NAME TYPE STATUS AGE +kfops-vscale-topology VerticalScaling Successful 3m56s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to scale the cluster. + +```bash +$ kubectl describe kafkaopsrequest -n demo kfops-vscale-topology +Name: kfops-vscale-topology +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-08-02T06:09:46Z + Generation: 1 + Resource Version: 337300 + UID: ca298c0a-e08d-4c78-acbc-40eb5e96532d +Spec: + Apply: IfReady + Database Ref: + Name: kafka-prod + Timeout: 5m + Type: VerticalScaling + Vertical Scaling: + Broker: + Resources: + Limits: + Cpu: 0.6 + Memory: 1.2Gi + Requests: + Cpu: 0.6 + Memory: 1.2Gi + Controller: + Resources: + Limits: + Cpu: 0.6 + Memory: 1.1Gi + Requests: + Cpu: 0.6 + Memory: 1.1Gi +Status: + Conditions: + Last Transition Time: 2024-08-02T06:09:46Z + Message: Kafka ops-request has started to vertically scaling the kafka nodes + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2024-08-02T06:09:50Z + Message: Successfully updated PetSets Resources + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-02T06:09:55Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T06:09:55Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T06:10:00Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-0 + Last Transition Time: 2024-08-02T06:10:05Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T06:10:05Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T06:10:15Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-1 + Last Transition Time: 2024-08-02T06:10:20Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T06:10:20Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + 
Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T06:10:35Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-0 + Last Transition Time: 2024-08-02T06:10:40Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T06:10:40Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T06:10:55Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-1 + Last Transition Time: 2024-08-02T06:11:00Z + Message: Successfully Restarted Pods With Resources + Observed Generation: 1 + Reason: RestartPods + Status: True + Type: RestartPods + Last Transition Time: 2024-08-02T06:11:00Z + Message: Successfully completed the vertical scaling for kafka + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 3m32s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kfops-vscale-topology + Normal Starting 3m32s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-prod + Normal Successful 3m32s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-vscale-topology + Normal UpdatePetSets 3m28s KubeDB Ops-manager Operator Successfully updated PetSets Resources + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 3m23s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 3m23s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 3m18s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 3m13s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 3m13s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 3m8s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 3m3s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 2m58s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 2m58s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 2m53s KubeDB Ops-manager Operator check pod running; 
ConditionStatus:False; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 2m43s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 2m38s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 2m38s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 2m33s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 2m23s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Normal RestartPods 2m18s KubeDB Ops-manager Operator Successfully Restarted Pods With Resources + Normal Starting 2m18s KubeDB Ops-manager Operator Resuming Kafka database: demo/kafka-prod + Normal Successful 2m18s KubeDB Ops-manager Operator Successfully resumed Kafka database: demo/kafka-prod for KafkaOpsRequest: kfops-vscale-topology +``` + +Now, we are going to verify from one of the Pod YAMLs whether the resources of the topology cluster have been updated to match the desired state. Let's check: + +```bash +$ kubectl get pod -n demo kafka-prod-broker-1 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "600m", + "memory": "1288490188800m" + }, + "requests": { + "cpu": "600m", + "memory": "1288490188800m" + } +} +$ kubectl get pod -n demo kafka-prod-controller-1 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "600m", + "memory": "1181116006400m" + }, + "requests": { + "cpu": "600m", + "memory": "1181116006400m" + } +} +``` + +Here, `1288490188800m` and `1181116006400m` are the canonical milli-bytes forms Kubernetes uses to store `1.2Gi` and `1.1Gi` respectively. The above output verifies that we have successfully scaled up the resources of the Kafka topology cluster. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete kf -n demo kafka-prod +kubectl delete kafkaopsrequest -n demo kfops-vscale-topology +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md). +- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md). +- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md). + +[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).) +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/schemaregistry/_index.md b/docs/guides/kafka/schemaregistry/_index.md new file mode 100644 index 0000000000..31f884753c --- /dev/null +++ b/docs/guides/kafka/schemaregistry/_index.md @@ -0,0 +1,10 @@ +--- +title: Schema Registry +menu: + docs_{{ .version }}: + identifier: kf-schema-registry-guides + name: SchemaRegistry + parent: kf-kafka-guides + weight: 25 +menu_name: docs_{{ .version }} +--- diff --git a/docs/guides/kafka/schemaregistry/overview.md b/docs/guides/kafka/schemaregistry/overview.md new file mode 100644 index 0000000000..017d78a9ba --- /dev/null +++ b/docs/guides/kafka/schemaregistry/overview.md @@ -0,0 +1,349 @@ +--- +title: Schema Registry Overview +menu: + docs_{{ .version }}: + identifier: kf-schema-registry-guides-overview + name: Overview + parent: kf-schema-registry-guides + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# SchemaRegistry QuickStart + +This tutorial will show you how to use KubeDB to run a [Schema Registry](https://www.apicur.io/registry/). + +

+  lifecycle +

+ +## Before You Begin + +At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +Now, install the KubeDB operator in your cluster following the steps [here](/docs/setup/install/_index.md). + +To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create namespace demo +namespace/demo created + +$ kubectl get namespace +NAME STATUS AGE +demo Active 9s +``` + +> Note: YAML files used in this tutorial are stored in [examples/kafka/schemaregistry/](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka/schemaregistry) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +> We have designed this tutorial to demonstrate a production setup of KubeDB managed Schema Registry. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/schemaregistry/overview.md#tips-for-testing). + +## Find Available SchemaRegistry Versions + +When you install the KubeDB operator, it registers a CRD named [SchemaRegistryVersion](/docs/guides/kafka/concepts/schemaregistryversion.md). The installation process comes with a set of tested SchemaRegistryVersion objects. Let's check available SchemaRegistryVersions by, + +```bash +$ kubectl get ksrversion + +NAME VERSION DISTRIBUTION REGISTRY_IMAGE DEPRECATED AGE +2.5.11.final 2.5.11 Apicurio apicurio/apicurio-registry-kafkasql:2.5.11.Final 3d +3.15.0 3.15.0 Aiven ghcr.io/aiven-open/karapace:3.15.0 3d +``` + +> **Note**: Currently Schema Registry is supported only for the Apicurio distribution. Use a version with distribution `Apicurio` to create a Schema Registry. + +Notice the `DEPRECATED` column. Here, `true` means that this SchemaRegistryVersion is deprecated for the current KubeDB version. KubeDB will not work for a deprecated SchemaRegistryVersion. You can also use the short form `ksrversion` to check available SchemaRegistryVersions. + +In this tutorial, we will use the `2.5.11.final` SchemaRegistryVersion CR to create a Kafka Schema Registry. + +## Create a Kafka Schema Registry + +The KubeDB operator implements a SchemaRegistry CRD to define the specification of SchemaRegistry. + +The SchemaRegistry instance used for this tutorial: + +```yaml +apiVersion: kafka.kubedb.com/v1alpha1 +kind: SchemaRegistry +metadata: + name: schemaregistry-quickstart + namespace: demo +spec: + version: 2.5.11.final + replicas: 2 + kafkaRef: + name: kafka-quickstart + namespace: demo + deletionPolicy: WipeOut +``` + +Here, + +- `spec.version` - is the name of the SchemaRegistryVersion CR. Here, a SchemaRegistry of version `2.5.11.final` will be created. +- `spec.replicas` - specifies the number of schema registry instances to run. Here, the SchemaRegistry will run with 2 replicas. +- `spec.kafkaRef` specifies the Kafka instance where the SchemaRegistry will store its schemas. Here, the SchemaRegistry will store schemas in the Kafka instance named `kafka-quickstart` in the `demo` namespace. It is an AppBinding reference to the Kafka instance. +- `spec.deletionPolicy` specifies what KubeDB should do when a user tries to delete the SchemaRegistry CR. The deletion policy `WipeOut` will delete all the instances and secrets when the SchemaRegistry CR is deleted.
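+ +Since `spec.kafkaRef` is an AppBinding reference, you can optionally verify that the referenced binding exists before applying the manifest. This is a quick, optional check; it assumes KubeDB has already created the AppBinding while provisioning the `kafka-quickstart` instance: + +```bash +$ kubectl get appbinding -n demo kafka-quickstart +```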
+ +> **Note**: If `spec.kafkaRef` is not provided, the SchemaRegistry will run in `inMemory` mode, storing schemas in its own memory. + +Before creating a SchemaRegistry, you have to deploy a `Kafka` cluster first. To deploy a Kafka cluster, follow the [Kafka Quickstart](/docs/guides/kafka/quickstart/kafka/index.md) guide. Let's assume `kafka-quickstart` is already deployed using KubeDB. +Let's create the SchemaRegistry CR that is shown above: + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/schemaregistry/schemaregistry-apicurio.yaml +schemaregistry.kafka.kubedb.com/schemaregistry-quickstart created +``` + +The SchemaRegistry's `STATUS` will go from `Provisioning` to `Ready` state within a few minutes. Once the `STATUS` is `Ready`, you are ready to use the SchemaRegistry. + +```bash +$ kubectl get schemaregistry -n demo -w +NAME TYPE VERSION STATUS AGE +schemaregistry-quickstart kafka.kubedb.com/v1alpha1 3.6.1 Provisioning 2s +schemaregistry-quickstart kafka.kubedb.com/v1alpha1 3.6.1 Provisioning 4s +. +. +schemaregistry-quickstart kafka.kubedb.com/v1alpha1 3.6.1 Ready 112s +``` + +Describe the `SchemaRegistry` object to observe the progress if something goes wrong or the status is not changing for a long period of time: + +```bash +$ kubectl describe schemaregistry -n demo schemaregistry-quickstart +Name: schemaregistry-quickstart +Namespace: demo +Labels: <none> +Annotations: <none> +API Version: kafka.kubedb.com/v1alpha1 +Kind: SchemaRegistry +Metadata: + Creation Timestamp: 2024-09-02T05:29:55Z + Finalizers: + kafka.kubedb.com/finalizer + Generation: 1 + Resource Version: 174971 + UID: 5a5f0c8f-778b-471f-973a-683004b26c78 +Spec: + Deletion Policy: WipeOut + Health Checker: + Failure Threshold: 3 + Period Seconds: 20 + Timeout Seconds: 10 + Kafka Ref: + Name: kafka-quickstart + Namespace: demo + Pod Template: + Controller: + Metadata: + Spec: + Containers: + Name: schema-registry + Resources: + Limits: + Memory: 1Gi + Requests: + Cpu: 500m + Memory: 1Gi + Security Context: + Allow Privilege Escalation: false + Capabilities: + Drop: + ALL + Run As Non Root: true + Run As User: 1001 + Seccomp Profile: + Type: RuntimeDefault + Pod Placement Policy: + Name: default + Security Context: + Fs Group: 1001 + Replicas: 2 + Version: 2.5.11.final +Status: + Conditions: + Last Transition Time: 2024-09-02T05:29:55Z + Message: The KubeDB operator has started the provisioning of SchemaRegistry: demo/schemaregistry-quickstart + Observed Generation: 1 + Reason: SchemaRegistryProvisioningStartedSuccessfully + Status: True + Type: ProvisioningStarted + Last Transition Time: 2024-09-02T05:30:47Z + Message: All desired replicas are ready. + Observed Generation: 1 + Reason: AllReplicasReady + Status: True + Type: ReplicaReady + Last Transition Time: 2024-09-02T05:31:09Z + Message: The SchemaRegistry: demo/schemaregistry-quickstart is accepting client requests + Observed Generation: 1 + Reason: DatabaseAcceptingConnectionRequest + Status: True + Type: AcceptingConnection + Last Transition Time: 2024-09-02T05:31:09Z + Message: The SchemaRegistry: demo/schemaregistry-quickstart is ready. + Observed Generation: 1 + Reason: ReadinessCheckSucceeded + Status: True + Type: Ready + Last Transition Time: 2024-09-02T05:31:11Z + Message: The SchemaRegistry: demo/schemaregistry-quickstart is successfully provisioned.
Observed Generation: 1 + Reason: DatabaseSuccessfullyProvisioned + Status: True + Type: Provisioned + Phase: Ready +Events: +``` + +### KubeDB Operator Generated Resources + +On deployment of a SchemaRegistry CR, the operator creates the following resources: + +```bash +$ kubectl get all,secret,petset -n demo -l 'app.kubernetes.io/instance=schemaregistry-quickstart' +NAME READY STATUS RESTARTS AGE +pod/schemaregistry-quickstart-0 1/1 Running 0 4m14s +pod/schemaregistry-quickstart-1 1/1 Running 0 3m28s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/schemaregistry-quickstart ClusterIP 10.96.187.98 <none> 8080/TCP 4m17s +service/schemaregistry-quickstart-pods ClusterIP None <none> 8080/TCP 4m17s + +NAME TYPE DATA AGE +secret/schemaregistry-quickstart-config Opaque 1 4m17s + +NAME AGE +petset.apps.k8s.appscode.com/schemaregistry-quickstart 4m14s +``` + +- `PetSet` - a PetSet named after the SchemaRegistry instance. +- `Services` - For a SchemaRegistry instance, a headless service is created with the name `{SchemaRegistry-name}-pods` and a primary service with the name `{SchemaRegistry-name}`. +- `Secrets` - default configuration secrets are generated for SchemaRegistry. + - `{SchemaRegistry-Name}-config` - the default configuration secret created by the operator. + +### Accessing Schema Registry (REST API) + +You can access the Schema Registry using the REST API. The Schema Registry REST API is available at port `8080` of the Schema Registry service. + +To access the Schema Registry REST API, you can use `kubectl port-forward` command to forward the port to your local machine. + +```bash +$ kubectl port-forward service/schemaregistry-quickstart 8080:8080 -n demo +Forwarding from 127.0.0.1:8080 -> 8080 +Forwarding from [::1]:8080 -> 8080 +``` + +In another terminal, you can use `curl` to get, create, or update schemas using the Schema Registry REST API. + +Create a new schema with the following command: + +```bash +$ curl -X POST -H "Content-Type: application/json; artifactType=AVRO" -H "X-Registry-ArtifactId: share-price" \ + --data '{"type":"record","name":"price","namespace":"com.example","fields":[{"name":"symbol","type":"string"},{"name":"price","type":"string"}]}' \ + localhost:8080/apis/registry/v2/groups/quickstart-group/artifacts | jq + +{ + "createdBy": "", + "createdOn": "2024-09-02T05:53:03+0000", + "modifiedBy": "", + "modifiedOn": "2024-09-02T05:53:03+0000", + "id": "share-price", + "version": "1", + "type": "AVRO", + "globalId": 2, + "state": "ENABLED", + "groupId": "quickstart-group", + "contentId": 2, + "references": [] +} +``` + +Get all the groups: + +```bash +$ curl localhost:8080/apis/registry/v2/groups | jq . +{ + "groups": [ + { + "id": "quickstart-group", + "createdOn": "2024-09-02T05:49:33+0000", + "createdBy": "", + "modifiedBy": "" + } + ], + "count": 1 +} +``` + +Get all the artifacts in the group `quickstart-group`: + +```bash +$ curl localhost:8080/apis/registry/v2/groups/quickstart-group/artifacts | jq +{ + "artifacts": [ + { + "id": "share-price", + "createdOn": "2024-09-02T05:53:03+0000", + "createdBy": "", + "type": "AVRO", + "state": "ENABLED", + "modifiedOn": "2024-09-02T05:53:03+0000", + "modifiedBy": "", + "groupId": "quickstart-group" + } + ], + "count": 1 +} +``` + +> **Note**: You can also use Schema Registry with Confluent 7 compatible REST APIs. To use the Confluent-compatible REST APIs, you have to add `apis/ccompat/v7` after the URL address (e.g.
`localhost:8081/subjects` -> `localhost:8080/apis/ccompat/v7/subjects`). + +### Accessing Schema Registry (UI) + +You can also use the Schema Registry UI to interact with the Schema Registry. The Schema Registry UI is available at port `8080` of the Schema Registry service. + +Use `http://localhost:8080/ui/artifacts` to access the Schema Registry UI. + +You will see the following screen: + +<p align="center">

+</figure>
+
+From the UI, you can create, update, delete, and view schemas. You can also set the compatibility level, view the schema history, and more.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo schemaregistry schemaregistry-quickstart -p '{"spec":{"deletionPolicy":"WipeOut"}}' --type="merge"
+schemaregistry.kafka.kubedb.com/schemaregistry-quickstart patched
+
+$ kubectl delete ksr schemaregistry-quickstart -n demo
+schemaregistry.kafka.kubedb.com "schemaregistry-quickstart" deleted
+
+$ kubectl delete kafka kafka-quickstart -n demo
+kafka.kubedb.com "kafka-quickstart" deleted
+
+$ kubectl delete namespace demo
+namespace "demo" deleted
+```
+
+## Tips for Testing
+
+If you are just testing some basic functionalities, you might want to avoid additional hassles due to some safety features that are great for the production environment. You can follow these tips to avoid them.
+
+1. **Use `deletionPolicy: Delete`**. It is nice to be able to resume the cluster from the previous one. So, we preserve auth `Secrets`. If you don't want to resume the cluster, you can just use `spec.deletionPolicy: WipeOut`. It will clean up every resource that was created with the SchemaRegistry CR. For more details, please visit [here](/docs/guides/kafka/concepts/schemaregistry.md#specdeletionpolicy).
+
+## Next Steps
+
+- [Quickstart Kafka](/docs/guides/kafka/quickstart/kafka/index.md) with KubeDB Operator.
+- [Quickstart ConnectCluster](/docs/guides/kafka/connectcluster/overview.md) with KubeDB Operator.
+- Use [kubedb cli](/docs/guides/kafka/cli/cli.md) to manage databases like kubectl for Kubernetes.
+- Detail concepts of [ConnectCluster object](/docs/guides/kafka/concepts/connectcluster.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/tls/combined.md b/docs/guides/kafka/tls/combined.md
new file mode 100644
index 0000000000..529392ee91
--- /dev/null
+++ b/docs/guides/kafka/tls/combined.md
@@ -0,0 +1,250 @@
+---
+title: Kafka Combined TLS/SSL Encryption
+menu:
+  docs_{{ .version }}:
+    identifier: kf-tls-combined
+    name: Combined Cluster
+    parent: kf-tls
+    weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Run Kafka with TLS/SSL (Transport Encryption)
+
+KubeDB supports providing TLS/SSL encryption for Kafka. This tutorial will show you how to use KubeDB to run a Kafka cluster with TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/kafka](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
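+
+Before moving on, you can do a quick sanity check that cert-manager is up and running, since the `Issuer` created later in this guide depends on it. This is a minimal check, assuming cert-manager was installed into its default `cert-manager` namespace:
+
+```bash
+# All cert-manager pods should be in Running state
+# before you create an Issuer or a TLS-enabled Kafka
+$ kubectl get pods -n cert-manager
+```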
+
+## Overview
+
+KubeDB uses the following CRD fields to enable SSL/TLS encryption in Kafka.
+
+- `spec:`
+  - `enableSSL`
+  - `tls:`
+    - `issuerRef`
+    - `certificate`
+
+Read about the fields in detail in the [kafka concept](/docs/guides/kafka/concepts/kafka.md) guide.
+
+`tls` is applicable for all types of Kafka (i.e., `combined` and `topology`).
+
+Users must specify the `tls.issuerRef` field. KubeDB uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificate` to generate certificate secrets. These certificate secrets are then used to generate required certificates including `ca.crt`, `tls.crt`, `tls.key`, `keystore.jks` and `truststore.jks`.
+
+## Create Issuer/ ClusterIssuer
+
+We are going to create an example `Issuer` that will be used throughout the duration of this tutorial to enable SSL/TLS in Kafka. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
+
+- Start off by generating your CA certificate using openssl.
+
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=kafka/O=kubedb"
+```
+
+- Now create a ca-secret using the certificate files you have just generated.
+
+```bash
+kubectl create secret tls kafka-ca \
+     --cert=ca.crt \
+     --key=ca.key \
+     --namespace=demo
+```
+
+Now, create an `Issuer` using the `ca-secret` you have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: kafka-ca-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: kafka-ca
+```
+
+Apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/tls/kf-issuer.yaml
+issuer.cert-manager.io/kafka-ca-issuer created
+```
+
+## TLS/SSL encryption in Kafka Combined Cluster
+
+```yaml
+apiVersion: kubedb.com/v1
+kind: Kafka
+metadata:
+  name: kafka-dev-tls
+  namespace: demo
+spec:
+  version: 3.6.1
+  enableSSL: true
+  tls:
+    issuerRef:
+      apiGroup: "cert-manager.io"
+      kind: Issuer
+      name: kafka-ca-issuer
+  replicas: 3
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  deletionPolicy: WipeOut
+```
+
+### Deploy Kafka Combined Cluster
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/tls/kafka-dev-tls.yaml
+kafka.kubedb.com/kafka-dev-tls created
+```
+
+Now, wait until `kafka-dev-tls` has status `Ready`, i.e.,
+
+```bash
+$ watch kubectl get kafka -n demo
+
+Every 2.0s: kubectl get kafka -n demo                                                  aadee: Fri Sep  6 12:34:51 2024
+NAME            TYPE            VERSION   STATUS         AGE
+kafka-dev-tls   kubedb.com/v1   3.6.1     Provisioning   0s
+kafka-dev-tls   kubedb.com/v1   3.6.1     Provisioning   12s
+.
+.
+kafka-dev-tls   kubedb.com/v1   3.6.1     Ready          77s
+```
+
+### Verify TLS/SSL in Kafka Combined Cluster
+
+```bash
+$ kubectl describe secret -n demo kafka-dev-tls-client-cert
+
+Name:         kafka-dev-tls-client-cert
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=kafka-dev-tls
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=kafkas.kubedb.com
+              controller.cert-manager.io/fao=true
+Annotations:  cert-manager.io/alt-names:
+                *.kafka-dev-tls-pods.demo.svc.cluster.local,kafka-dev-tls-pods,kafka-dev-tls-pods.demo.svc,kafka-dev-tls-pods.demo.svc.cluster.local,local...
+              cert-manager.io/certificate-name: kafka-dev-tls-client-cert
+              cert-manager.io/common-name: kafka-dev-tls-pods.demo.svc
+              cert-manager.io/ip-sans: 127.0.0.1
+              cert-manager.io/issuer-group: cert-manager.io
+              cert-manager.io/issuer-kind: Issuer
+              cert-manager.io/issuer-name: kafka-ca-issuer
+              cert-manager.io/uri-sans:
+
+Type:  kubernetes.io/tls
+
+Data
+====
+truststore.jks:  891 bytes
+ca.crt:          1184 bytes
+keystore.jks:    3245 bytes
+tls.crt:         1452 bytes
+tls.key:         1704 bytes
+```
+
+Now, let's exec into a Kafka broker pod and verify from the broker configuration that TLS is enabled.
+
+```bash
+$ kubectl exec -it -n demo kafka-dev-tls-0 -- kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep 'ssl.keystore'
+  ssl.keystore.certificate.chain=null sensitive=true synonyms={}
+  ssl.keystore.key=null sensitive=true synonyms={}
+  ssl.keystore.location=/var/private/ssl/server.keystore.jks sensitive=false synonyms={STATIC_BROKER_CONFIG:ssl.keystore.location=/var/private/ssl/server.keystore.jks}
+  ssl.keystore.password=null sensitive=true synonyms={STATIC_BROKER_CONFIG:ssl.keystore.password=null}
+  ssl.keystore.type=JKS sensitive=false synonyms={DEFAULT_CONFIG:ssl.keystore.type=JKS}
+  zookeeper.ssl.keystore.location=null sensitive=false synonyms={}
+  zookeeper.ssl.keystore.password=null sensitive=true synonyms={}
+  zookeeper.ssl.keystore.type=null sensitive=false synonyms={}
+  ssl.keystore.certificate.chain=null sensitive=true synonyms={}
+  ssl.keystore.key=null sensitive=true synonyms={}
+  ssl.keystore.location=/var/private/ssl/server.keystore.jks sensitive=false synonyms={STATIC_BROKER_CONFIG:ssl.keystore.location=/var/private/ssl/server.keystore.jks}
+  ssl.keystore.password=null sensitive=true synonyms={STATIC_BROKER_CONFIG:ssl.keystore.password=null}
+  ssl.keystore.type=JKS sensitive=false synonyms={DEFAULT_CONFIG:ssl.keystore.type=JKS}
+  zookeeper.ssl.keystore.location=null sensitive=false synonyms={}
+  zookeeper.ssl.keystore.password=null sensitive=true synonyms={}
+  zookeeper.ssl.keystore.type=null sensitive=false synonyms={}
+  ssl.keystore.certificate.chain=null sensitive=true synonyms={}
+  ssl.keystore.key=null sensitive=true synonyms={}
+  ssl.keystore.location=/var/private/ssl/server.keystore.jks sensitive=false synonyms={STATIC_BROKER_CONFIG:ssl.keystore.location=/var/private/ssl/server.keystore.jks}
+  ssl.keystore.password=null sensitive=true synonyms={STATIC_BROKER_CONFIG:ssl.keystore.password=null}
+  ssl.keystore.type=JKS sensitive=false synonyms={DEFAULT_CONFIG:ssl.keystore.type=JKS}
+  zookeeper.ssl.keystore.location=null sensitive=false synonyms={}
+  zookeeper.ssl.keystore.password=null sensitive=true synonyms={}
+  zookeeper.ssl.keystore.type=null sensitive=false synonyms={}
+```
+
+We can see from the above output that the keystore location is `/var/private/ssl/server.keystore.jks`, which means that TLS is enabled.
+
+You will find a file named `clientauth.properties` in the config directory. This file is generated by the operator and contains the necessary authentication/authorization/certificate configurations that are required to connect to the Kafka cluster.
+
+```bash
+root@kafka-dev-tls-0:~# cat config/clientauth.properties
+sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="*************";
+security.protocol=SASL_SSL
+sasl.mechanism=PLAIN
+ssl.truststore.location=/var/private/ssl/server.truststore.jks
+ssl.truststore.password=***********
+```
+
+Now, let's exec into the Kafka pod and connect using this configuration to verify that TLS is enabled.
+
+```bash
+$ kubectl exec -it -n demo kafka-dev-tls-0 -- bash
+kafka@kafka-dev-tls-0:~$ kafka-metadata-quorum.sh --command-config config/clientauth.properties --bootstrap-server localhost:9092 describe --status
+ClusterId:              11ef-921c-f2a07f85765w
+LeaderId:               1
+LeaderEpoch:            17
+HighWatermark:          1292
+MaxFollowerLag:         0
+MaxFollowerLagTimeMs:   16
+CurrentVoters:          [0,1,2]
+CurrentObservers:       []
+```
+
+From the above output, we can see that we are able to connect to the Kafka cluster using the TLS configuration.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete kafka -n demo kafka-dev-tls
+kubectl delete issuer -n demo kafka-ca-issuer
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).
+- Monitor your Kafka cluster with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+- Monitor your Kafka cluster with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).
+- Use [kubedb cli](/docs/guides/kafka/cli/cli.md) to manage databases like kubectl for Kubernetes.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/tls/connectcluster.md b/docs/guides/kafka/tls/connectcluster.md
new file mode 100644
index 0000000000..64be177ebc
--- /dev/null
+++ b/docs/guides/kafka/tls/connectcluster.md
@@ -0,0 +1,224 @@
+---
+title: Kafka ConnectCluster TLS/SSL Encryption
+menu:
+  docs_{{ .version }}:
+    identifier: kf-tls-connectcluster
+    name: ConnectCluster
+    parent: kf-tls
+    weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Run Kafka ConnectCluster with TLS/SSL (Transport Encryption)
+
+KubeDB supports providing TLS/SSL encryption for Kafka ConnectCluster. This tutorial will show you how to use KubeDB to run a Kafka ConnectCluster with TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/kafka](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB uses the following CRD fields to enable SSL/TLS encryption in Kafka.
+
+- `spec:`
+  - `enableSSL`
+  - `tls:`
+    - `issuerRef`
+    - `certificate`
+
+Read about the fields in detail in the [kafka concept](/docs/guides/kafka/concepts/kafka.md) guide.
+
+`tls` is applicable for all types of Kafka (i.e., `combined` and `topology`).
+
+Users must specify the `tls.issuerRef` field. KubeDB uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificate` to generate certificate secrets. These certificate secrets are then used to generate required certificates including `ca.crt`, `tls.crt`, `tls.key`, `keystore.jks` and `truststore.jks`.
+
+## Create Issuer/ ClusterIssuer
+
+We are going to create an example `Issuer` that will be used throughout the duration of this tutorial to enable SSL/TLS in Kafka. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
+
+- Start off by generating your CA certificate using openssl.
+
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=connectcluster/O=kubedb"
+```
+
+- Now create a ca-secret using the certificate files you have just generated.
+
+```bash
+kubectl create secret tls connectcluster-ca \
+     --cert=ca.crt \
+     --key=ca.key \
+     --namespace=demo
+```
+
+Now, create an `Issuer` using the `ca-secret` you have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: connectcluster-ca-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: connectcluster-ca
+```
+
+Apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/tls/connectcluster-issuer.yaml
+issuer.cert-manager.io/connectcluster-ca-issuer created
+```
+
+## TLS/SSL encryption in Kafka ConnectCluster
+
+> **Note:** Before creating a Kafka ConnectCluster, make sure you have a Kafka cluster with/without TLS/SSL enabled. If you don't have a Kafka cluster, you can follow the steps [here](/docs/guides/kafka/tls/topology.md).
+
+```yaml
+apiVersion: kafka.kubedb.com/v1alpha1
+kind: ConnectCluster
+metadata:
+  name: connectcluster-tls
+  namespace: demo
+spec:
+  version: 3.6.1
+  enableSSL: true
+  tls:
+    issuerRef:
+      apiGroup: cert-manager.io
+      kind: Issuer
+      name: connectcluster-ca-issuer
+  replicas: 3
+  connectorPlugins:
+    - postgres-2.4.2.final
+    - jdbc-2.6.1.final
+  kafkaRef:
+    name: kafka-prod-tls
+    namespace: demo
+  deletionPolicy: WipeOut
+```
+
+Here,
+- `spec.enableSSL` is set to `true` to enable TLS/SSL encryption.
+- `spec.tls.issuerRef` refers to the `Issuer` that we have created in the previous step.
+- `spec.kafkaRef` refers to the Kafka cluster that we have created from [here](/docs/guides/kafka/tls/topology.md). A quick readiness check for it is shown below.
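+
+Since the ConnectCluster depends on its backing Kafka cluster, it is a good idea to verify that the referenced cluster is in `Ready` state before proceeding. A minimal check, assuming the `kafka-prod-tls` cluster from the topology TLS guide:
+
+```bash
+# The Kafka cluster referenced by spec.kafkaRef must exist and be Ready
+$ kubectl get kafka -n demo kafka-prod-tls
+```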
+
+### Deploy Kafka ConnectCluster with TLS/SSL
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/tls/connectcluster-tls.yaml
+connectcluster.kafka.kubedb.com/connectcluster-tls created
+```
+
+Now, wait until `connectcluster-tls` has status `Ready`, i.e.,
+
+```bash
+$ watch kubectl get connectcluster -n demo
+
+Every 2.0s: kubectl get connectcluster -n demo                                         aadee: Fri Sep 6 14:59:32 2024
+
+NAME                 TYPE                        VERSION   STATUS         AGE
+connectcluster-tls   kafka.kubedb.com/v1alpha1   3.6.1     Provisioning   0s
+connectcluster-tls   kafka.kubedb.com/v1alpha1   3.6.1     Provisioning   34s
+.
+.
+connectcluster-tls   kafka.kubedb.com/v1alpha1   3.6.1     Ready          2m
+```
+
+### Verify TLS/SSL in Kafka ConnectCluster
+
+```bash
+$ kubectl describe secret -n demo connectcluster-tls-client-connect-cert
+
+Name:         connectcluster-tls-client-connect-cert
+Namespace:    demo
+Labels:       app.kubernetes.io/component=kafka
+              app.kubernetes.io/instance=connectcluster-tls
+              app.kubernetes.io/managed-by=kafka.kubedb.com
+              app.kubernetes.io/name=connectclusters.kafka.kubedb.com
+              controller.cert-manager.io/fao=true
+Annotations:  cert-manager.io/alt-names:
+                *.connectcluster-tls-pods.demo.svc,*.connectcluster-tls-pods.demo.svc.cluster.local,connectcluster-tls,connectcluster-tls-pods.demo.svc,co...
+              cert-manager.io/certificate-name: connectcluster-tls-client-connect-cert
+              cert-manager.io/common-name: connectcluster-tls-pods.demo.svc
+              cert-manager.io/ip-sans: 127.0.0.1
+              cert-manager.io/issuer-group: cert-manager.io
+              cert-manager.io/issuer-kind: Issuer
+              cert-manager.io/issuer-name: connectcluster-ca-issuer
+              cert-manager.io/uri-sans:
+
+Type:  kubernetes.io/tls
+
+Data
+====
+ca.crt:   1184 bytes
+tls.crt:  1566 bytes
+tls.key:  1704 bytes
+```
+
+Now, let's exec into a ConnectCluster pod and verify that TLS is enabled.
+
+```bash
+$ kubectl exec -it connectcluster-tls-0 -n demo -- bash
+kafka@connectcluster-tls-0:~$ curl -u "$CONNECT_CLUSTER_USER:$CONNECT_CLUSTER_PASSWORD" http://localhost:8083
+curl: (1) Received HTTP/0.9 when not allowed
+```
+
+From the above output, we can see that we are unable to connect to the ConnectCluster REST API over plain HTTP.
+
+```bash
+kafka@connectcluster-tls-0:~$ curl -u "$CONNECT_CLUSTER_USER:$CONNECT_CLUSTER_PASSWORD" https://localhost:8083
+curl: (60) SSL certificate problem: unable to get local issuer certificate
+More details here: https://curl.se/docs/sslcerts.html
+
+curl failed to verify the legitimacy of the server and therefore could not
+establish a secure connection to it. To learn more about this situation and
+how to fix it, please visit the web page mentioned above.
+```
+
+Here, we can see that we are still unable to connect over HTTPS. This is because the client does not have the CA certificate to verify the server certificate.
+
+```bash
+kafka@connectcluster-tls-0:~$ curl --cacert /var/private/ssl/ca.crt -u "$CONNECT_CLUSTER_USER:$CONNECT_CLUSTER_PASSWORD" https://localhost:8083
+{"version":"3.6.1","commit":"5e3c2b738d253ff5","kafka_cluster_id":"11ef-8f52-c284f2efe29w"}
+```
+
+From the above output, we can see that we are able to connect to the Kafka ConnectCluster using the TLS configuration.
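+
+With the CA certificate supplied, the rest of the Kafka Connect REST API can be used over HTTPS in the same way. For example, you can list the connectors registered with the cluster, which should return an empty list on a freshly created ConnectCluster:
+
+```bash
+# GET /connectors lists all connectors registered with this ConnectCluster
+kafka@connectcluster-tls-0:~$ curl --cacert /var/private/ssl/ca.crt -u "$CONNECT_CLUSTER_USER:$CONNECT_CLUSTER_PASSWORD" https://localhost:8083/connectors
+[]
+```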
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete kafka -n demo kafka-prod-tls
+kubectl delete connectcluster -n demo connectcluster-tls
+kubectl delete issuer -n demo connectcluster-ca-issuer
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).
+- Monitor your Kafka cluster with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+- Monitor your Kafka cluster with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).
+- Use [kubedb cli](/docs/guides/kafka/cli/cli.md) to manage databases like kubectl for Kubernetes.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/tls/overview.md b/docs/guides/kafka/tls/overview.md
index b9977b9e2c..f094edb746 100644
--- a/docs/guides/kafka/tls/overview.md
+++ b/docs/guides/kafka/tls/overview.md
@@ -51,9 +51,9 @@ Deploying Kafka with TLS/SSL configuration process consists of the following ste
 
 2. Then the user creates a `Kafka` CR which refers to the `Issuer/ClusterIssuer` CR that the user created in the previous step.
 
-3. `KubeDB` Provisioner  operator watches for the `Kafka` cr.
+3. `KubeDB` Provisioner operator watches for the `Kafka` cr.
 
-4. When it finds one, it creates `Secret`, `Service`, etc. for the `Kafka` database.
+4. When it finds one, it creates `Secret`, `Service`, etc. for the `Kafka` cluster.
 
 5. `KubeDB` Ops-manager operator watches for `Kafka`(5c), `Issuer/ClusterIssuer`(5b), `Secret` and `Service`(5a).
diff --git a/docs/guides/kafka/tls/topology.md b/docs/guides/kafka/tls/topology.md
new file mode 100644
index 0000000000..2c94878d98
--- /dev/null
+++ b/docs/guides/kafka/tls/topology.md
@@ -0,0 +1,253 @@
+---
+title: Kafka Topology TLS/SSL Encryption
+menu:
+  docs_{{ .version }}:
+    identifier: kf-tls-topology
+    name: Topology Cluster
+    parent: kf-tls
+    weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Run Kafka with TLS/SSL (Transport Encryption)
+
+KubeDB supports providing TLS/SSL encryption for Kafka. This tutorial will show you how to use KubeDB to run a Kafka cluster with TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo`.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/kafka](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Overview
+
+KubeDB uses the following CRD fields to enable SSL/TLS encryption in Kafka.
+
+- `spec:`
+  - `enableSSL`
+  - `tls:`
+    - `issuerRef`
+    - `certificate`
+
+Read about the fields in detail in the [kafka concept](/docs/guides/kafka/concepts/kafka.md) guide.
+
+`tls` is applicable for all types of Kafka (i.e., `combined` and `topology`).
+
+Users must specify the `tls.issuerRef` field. KubeDB uses the `issuer` or `clusterIssuer` referenced in the `tls.issuerRef` field, and the certificate specs provided in `tls.certificate` to generate certificate secrets. These certificate secrets are then used to generate required certificates including `ca.crt`, `tls.crt`, `tls.key`, `keystore.jks` and `truststore.jks`.
+
+## Create Issuer/ ClusterIssuer
+
+We are going to create an example `Issuer` that will be used throughout the duration of this tutorial to enable SSL/TLS in Kafka. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`.
+
+- Start off by generating your CA certificate using openssl.
+
+```bash
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=kafka/O=kubedb"
+```
+
+- Now create a ca-secret using the certificate files you have just generated.
+
+```bash
+kubectl create secret tls kafka-ca \
+     --cert=ca.crt \
+     --key=ca.key \
+     --namespace=demo
+```
+
+Now, create an `Issuer` using the `ca-secret` you have just created. The `YAML` file looks like this:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: kafka-ca-issuer
+  namespace: demo
+spec:
+  ca:
+    secretName: kafka-ca
+```
+
+Apply the `YAML` file:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/tls/kf-issuer.yaml
+issuer.cert-manager.io/kafka-ca-issuer created
+```
+
+## TLS/SSL encryption in Kafka Topology Cluster
+
+```yaml
+apiVersion: kubedb.com/v1
+kind: Kafka
+metadata:
+  name: kafka-prod-tls
+  namespace: demo
+spec:
+  version: 3.6.1
+  enableSSL: true
+  tls:
+    issuerRef:
+      apiGroup: "cert-manager.io"
+      kind: Issuer
+      name: kafka-ca-issuer
+  topology:
+    broker:
+      replicas: 2
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    controller:
+      replicas: 2
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+  storageType: Durable
+  deletionPolicy: WipeOut
+```
+
+### Deploy Kafka Topology Cluster with TLS/SSL
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/tls/kafka-prod-tls.yaml
+kafka.kubedb.com/kafka-prod-tls created
+```
+
+Now, wait until `kafka-prod-tls` has status `Ready`, i.e.,
+
+```bash
+$ watch kubectl get kafka -n demo
+
+Every 2.0s: kubectl get kafka -n demo                                                  aadee: Fri Sep 6 12:34:51 2024
+NAME             TYPE            VERSION   STATUS         AGE
+kafka-prod-tls   kubedb.com/v1   3.6.1     Provisioning   17s
+kafka-prod-tls   kubedb.com/v1   3.6.1     Provisioning   12s
+.
+.
+kafka-prod-tls   kubedb.com/v1   3.6.1   Ready   2m1s
+```
+
+### Verify TLS/SSL in Kafka Topology Cluster
+
+```bash
+$ kubectl describe secret kafka-prod-tls-client-cert -n demo
+
+Name:         kafka-prod-tls-client-cert
+Namespace:    demo
+Labels:       app.kubernetes.io/component=database
+              app.kubernetes.io/instance=kafka-prod-tls
+              app.kubernetes.io/managed-by=kubedb.com
+              app.kubernetes.io/name=kafkas.kubedb.com
+              controller.cert-manager.io/fao=true
+Annotations:  cert-manager.io/alt-names:
+                *.kafka-prod-tls-pods.demo.svc.cluster.local,kafka-prod-tls-pods,kafka-prod-tls-pods.demo.svc,kafka-prod-tls-pods.demo.svc.cluster.local,l...
+              cert-manager.io/certificate-name: kafka-prod-tls-client-cert
+              cert-manager.io/common-name: kafka-prod-tls-pods.demo.svc
+              cert-manager.io/ip-sans: 127.0.0.1
+              cert-manager.io/issuer-group: cert-manager.io
+              cert-manager.io/issuer-kind: Issuer
+              cert-manager.io/issuer-name: kafka-ca-issuer
+              cert-manager.io/uri-sans:
+
+Type:  kubernetes.io/tls
+
+Data
+====
+ca.crt:          1184 bytes
+keystore.jks:    3254 bytes
+tls.crt:         1460 bytes
+tls.key:         1708 bytes
+truststore.jks:  891 bytes
+```
+
+Now, let's exec into a Kafka broker pod and verify from the broker configuration that TLS is enabled.
+
+```bash
+$ kubectl exec -it -n demo kafka-prod-tls-broker-0 -- kafka-configs.sh --bootstrap-server localhost:9092 --command-config /opt/kafka/config/clientauth.properties --describe --entity-type brokers --all | grep 'ssl.keystore'
+  ssl.keystore.certificate.chain=null sensitive=true synonyms={}
+  ssl.keystore.key=null sensitive=true synonyms={}
+  ssl.keystore.location=/var/private/ssl/server.keystore.jks sensitive=false synonyms={STATIC_BROKER_CONFIG:ssl.keystore.location=/var/private/ssl/server.keystore.jks}
+  ssl.keystore.password=null sensitive=true synonyms={STATIC_BROKER_CONFIG:ssl.keystore.password=null}
+  ssl.keystore.type=JKS sensitive=false synonyms={DEFAULT_CONFIG:ssl.keystore.type=JKS}
+  zookeeper.ssl.keystore.location=null sensitive=false synonyms={}
+  zookeeper.ssl.keystore.password=null sensitive=true synonyms={}
+  zookeeper.ssl.keystore.type=null sensitive=false synonyms={}
+  ssl.keystore.certificate.chain=null sensitive=true synonyms={}
+  ssl.keystore.key=null sensitive=true synonyms={}
+  ssl.keystore.location=/var/private/ssl/server.keystore.jks sensitive=false synonyms={STATIC_BROKER_CONFIG:ssl.keystore.location=/var/private/ssl/server.keystore.jks}
+  ssl.keystore.password=null sensitive=true synonyms={STATIC_BROKER_CONFIG:ssl.keystore.password=null}
+  ssl.keystore.type=JKS sensitive=false synonyms={DEFAULT_CONFIG:ssl.keystore.type=JKS}
+  zookeeper.ssl.keystore.location=null sensitive=false synonyms={}
+  zookeeper.ssl.keystore.password=null sensitive=true synonyms={}
+  zookeeper.ssl.keystore.type=null sensitive=false synonyms={}
+```
+
+We can see from the above output that the keystore location is `/var/private/ssl/server.keystore.jks`, which means that TLS is enabled.
+
+You will find a file named `clientauth.properties` in the config directory. This file is generated by the operator and contains the necessary authentication/authorization/certificate configurations that are required to connect to the Kafka cluster.
+
+```bash
+root@kafka-prod-tls-broker-0:~# cat config/clientauth.properties
+sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="*************";
+security.protocol=SASL_SSL
+sasl.mechanism=PLAIN
+ssl.truststore.location=/var/private/ssl/server.truststore.jks
+ssl.truststore.password=***********
+```
+
+Now, let's exec into the Kafka pod and connect using this configuration to verify that TLS is enabled.
+
+```bash
+$ kubectl exec -it -n demo kafka-prod-tls-broker-0 -- bash
+kafka@kafka-prod-tls-broker-0:~$ kafka-metadata-quorum.sh --command-config config/clientauth.properties --bootstrap-server localhost:9092 describe --status
+ClusterId:              11ef-921c-f2a07f85765w
+LeaderId:               1001
+LeaderEpoch:            17
+HighWatermark:          390
+MaxFollowerLag:         0
+MaxFollowerLagTimeMs:   18
+CurrentVoters:          [1000,1001]
+CurrentObservers:       [0,1]
+```
+
+From the above output, we can see that we are able to connect to the Kafka cluster using the TLS configuration.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete kafka -n demo kafka-prod-tls
+kubectl delete issuer -n demo kafka-ca-issuer
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).
+- Monitor your Kafka cluster with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+- Monitor your Kafka cluster with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).
+- Use [kubedb cli](/docs/guides/kafka/cli/cli.md) to manage databases like kubectl for Kubernetes.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/update-version/_index.md b/docs/guides/kafka/update-version/_index.md
new file mode 100644
index 0000000000..08f8af5d4f
--- /dev/null
+++ b/docs/guides/kafka/update-version/_index.md
@@ -0,0 +1,10 @@
+---
+title: Update Version
+menu:
+  docs_{{ .version }}:
+    identifier: kf-update-version
+    name: UpdateVersion
+    parent: kf-kafka-guides
+    weight: 42
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/kafka/update-version/overview.md b/docs/guides/kafka/update-version/overview.md
new file mode 100644
index 0000000000..78d6eb593a
--- /dev/null
+++ b/docs/guides/kafka/update-version/overview.md
@@ -0,0 +1,54 @@
+---
+title: Update Version Overview
+menu:
+  docs_{{ .version }}:
+    identifier: kf-update-version-overview
+    name: Overview
+    parent: kf-update-version
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Kafka Update Version Overview
+
+This guide will give you an overview of how the KubeDB Ops-manager operator updates the version of `Kafka`.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Kafka](/docs/guides/kafka/concepts/kafka.md)
+  - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
+
+## How Update Version Process Works
+
+The following diagram shows how the KubeDB Ops-manager operator updates the version of `Kafka`. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="updating Process of Kafka" src="/docs/images/day-2-operation/kafka/kf-update-version.svg">
+<figcaption align="center">Fig: updating Process of Kafka</figcaption>
+</figure>
+
+The updating process consists of the following steps:
+
+1. At first, a user creates a `Kafka` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `Kafka` CR.
+
+3. When the operator finds a `Kafka` CR, it creates required number of `PetSets` and related necessary stuff like secrets, services, etc.
+
+4. Then, in order to update the version of the `Kafka` database, the user creates a `KafkaOpsRequest` CR with the desired version.
+
+5. `KubeDB` Ops-manager operator watches the `KafkaOpsRequest` CR.
+
+6. When it finds a `KafkaOpsRequest` CR, it halts the `Kafka` object which is referred from the `KafkaOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Kafka` object during the updating process.
+
+7. By looking at the target version from the `KafkaOpsRequest` CR, the `KubeDB` Ops-manager operator updates the images of all the `PetSets`.
+
+8. After successfully updating the `PetSets` and their `Pods` images, the `KubeDB` Ops-manager operator updates the image of the `Kafka` object to reflect the updated state of the database.
+
+9. After successfully updating the `Kafka` object, the `KubeDB` Ops-manager operator resumes the `Kafka` object so that the `KubeDB` Provisioner operator can resume its usual operations.
+
+In the next doc, we are going to show a step-by-step guide on updating a Kafka database using the updateVersion operation.
\ No newline at end of file
diff --git a/docs/guides/kafka/update-version/update-version.md b/docs/guides/kafka/update-version/update-version.md
new file mode 100644
index 0000000000..3345e439bc
--- /dev/null
+++ b/docs/guides/kafka/update-version/update-version.md
@@ -0,0 +1,339 @@
+---
+title: Update Version of Kafka
+menu:
+  docs_{{ .version }}:
+    identifier: kf-update-version-kafka
+    name: Kafka
+    parent: kf-update-version
+    weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Update version of Kafka
+
+This guide will show you how to use the `KubeDB` Ops-manager operator to update the version of a `Kafka` Combined or Topology cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Kafka](/docs/guides/kafka/concepts/kafka.md)
+  - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
+  - [Updating Overview](/docs/guides/kafka/update-version/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in [docs/examples/kafka](/docs/examples/kafka) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Prepare Kafka
+
+Now, we are going to deploy a `Kafka` topology cluster with version `3.5.2`.
+
+### Deploy Kafka
+
+In this section, we are going to deploy a Kafka topology cluster. Then, in the next section, we will update the version using `KafkaOpsRequest` CRD.
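+
+Before deploying, you may want to confirm that both the starting version and the target version are available in your KubeDB installation. A quick check, assuming the standard KubeDB catalog is installed:
+
+```bash
+# KafkaVersion objects list the Kafka versions KubeDB can provision
+$ kubectl get kafkaversion
+```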
+Below is the YAML of the `Kafka` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1
+kind: Kafka
+metadata:
+  name: kafka-prod
+  namespace: demo
+spec:
+  version: 3.5.2
+  topology:
+    broker:
+      replicas: 2
+      podTemplate:
+        spec:
+          containers:
+            - name: kafka
+              resources:
+                requests:
+                  cpu: "500m"
+                  memory: "1Gi"
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    controller:
+      replicas: 2
+      podTemplate:
+        spec:
+          containers:
+            - name: kafka
+              resources:
+                requests:
+                  cpu: "500m"
+                  memory: "1Gi"
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+  storageType: Durable
+  deletionPolicy: WipeOut
+```
+
+Let's create the `Kafka` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/update-version/kafka.yaml
+kafka.kubedb.com/kafka-prod created
+```
+
+Now, wait until `kafka-prod` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get kf -n demo -w
+NAME         TYPE            VERSION   STATUS         AGE
+kafka-prod   kubedb.com/v1   3.5.2     Provisioning   0s
+kafka-prod   kubedb.com/v1   3.5.2     Provisioning   55s
+.
+.
+kafka-prod   kubedb.com/v1   3.5.2     Ready          119s
+```
+
+We are now ready to apply the `KafkaOpsRequest` CR to update.
+
+### Update Kafka Version
+
+Here, we are going to update `Kafka` from `3.5.2` to `3.6.1`.
+
+#### Create KafkaOpsRequest:
+
+In order to update the version, we have to create a `KafkaOpsRequest` CR with your desired version that is supported by `KubeDB`. Below is the YAML of the `KafkaOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: KafkaOpsRequest
+metadata:
+  name: kafka-update-version
+  namespace: demo
+spec:
+  type: UpdateVersion
+  databaseRef:
+    name: kafka-prod
+  updateVersion:
+    targetVersion: 3.6.1
+  timeout: 5m
+  apply: IfReady
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the operation on the `kafka-prod` Kafka cluster.
+- `spec.type` specifies that we are going to perform `UpdateVersion` on our database.
+- `spec.updateVersion.targetVersion` specifies the expected version of the database, `3.6.1`.
+
+> **Note:** If you want to update a combined Kafka cluster, just refer to the combined `Kafka` object name in `spec.databaseRef.name`. To create a combined Kafka cluster, you can refer to the [Kafka Combined](/docs/guides/kafka/clustering/combined-cluster/index.md) guide.
+
+Let's create the `KafkaOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/update-version/update-version-ops.yaml
+kafkaopsrequest.ops.kubedb.com/kafka-update-version created
+```
+
+#### Verify Kafka version updated successfully
+
+If everything goes well, the `KubeDB` Ops-manager operator will update the image of the `Kafka` object and the related `PetSets` and `Pods`.
+
+Let's wait for the `KafkaOpsRequest` to be `Successful`. Run the following command to watch the `KafkaOpsRequest` CR,
+
+```bash
+$ kubectl get kafkaopsrequest -n demo
+NAME                   TYPE            STATUS       AGE
+kafka-update-version   UpdateVersion   Successful   2m6s
+```
+
+We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest`, we will get an overview of the steps that were followed to update the database version.
+ +```bash +$ kubectl describe kafkaopsrequest -n demo kafka-update-version +Name: kafka-update-version +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-07-30T10:18:44Z + Generation: 1 + Resource Version: 90131 + UID: a274197b-c379-485b-9a36-9eb1e673eee4 +Spec: + Apply: IfReady + Database Ref: + Name: kafka-prod + Timeout: 5m + Type: UpdateVersion + Update Version: + Target Version: 3.6.1 +Status: + Conditions: + Last Transition Time: 2024-07-30T10:18:44Z + Message: Kafka ops-request has started to update version + Observed Generation: 1 + Reason: UpdateVersion + Status: True + Type: UpdateVersion + Last Transition Time: 2024-07-30T10:18:54Z + Message: successfully reconciled the Kafka with updated version + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-07-30T10:18:59Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-0 + Last Transition Time: 2024-07-30T10:18:59Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-0 + Last Transition Time: 2024-07-30T10:19:19Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-0 + Last Transition Time: 2024-07-30T10:19:24Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-controller-1 + Last Transition Time: 2024-07-30T10:19:24Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-controller-1 + Last Transition Time: 2024-07-30T10:19:49Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-controller-1 + Last Transition Time: 2024-07-30T10:19:54Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-0 + Last Transition Time: 2024-07-30T10:19:54Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-0 + Last Transition Time: 2024-07-30T10:20:14Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-0 + Last Transition Time: 2024-07-30T10:20:19Z + Message: get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: GetPod--kafka-prod-broker-1 + Last Transition Time: 2024-07-30T10:20:19Z + Message: evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: EvictPod--kafka-prod-broker-1 + Last Transition Time: 2024-07-30T10:20:44Z + Message: check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--kafka-prod-broker-1 + Last Transition Time: 2024-07-30T10:20:49Z + Message: Successfully Restarted Kafka nodes + Observed Generation: 1 + Reason: RestartPods + Status: True + Type: RestartPods + Last Transition Time: 2024-07-30T10:20:50Z + Message: 
Successfully completed update kafka version + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 3m7s KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kafka-update-version + Normal Starting 3m7s KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-prod + Normal Successful 3m7s KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-prod for KafkaOpsRequest: kafka-update-version + Normal UpdatePetSets 2m57s KubeDB Ops-manager Operator successfully reconciled the Kafka with updated version + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 2m52s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 2m52s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 2m47s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 2m32s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 2m27s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 2m27s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 2m22s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-controller-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 2m2s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-controller-1 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 117s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 117s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0 112s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-broker-0 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 97s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-broker-0 + Warning get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 92s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 92s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 87s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:kafka-prod-broker-1 + Warning check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 67s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:kafka-prod-broker-1 + Normal RestartPods 
62s KubeDB Ops-manager Operator Successfully Restarted Kafka nodes + Normal Starting 62s KubeDB Ops-manager Operator Resuming Kafka database: demo/kafka-prod + Normal Successful 61s KubeDB Ops-manager Operator Successfully resumed Kafka database: demo/kafka-prod for KafkaOpsRequest: kafka-update-version +``` + +Now, we are going to verify whether the `Kafka` and the related `PetSets` and their `Pods` have the new version image. Let's check, + +```bash +$ kubectl get kf -n demo kafka-prod -o=jsonpath='{.spec.version}{"\n"}' +3.6.1 + +$ kubectl get petset -n demo kafka-prod-broker -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}' +ghcr.io/appscode-images/kafka-kraft:3.6.1@sha256:e251d3c0ceee0db8400b689e42587985034852a8a6c81b5973c2844e902e6d11 + +$ kubectl get pods -n demo kafka-prod-broker-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}' +ghcr.io/appscode-images/kafka-kraft:3.6.1@sha256:e251d3c0ceee0db8400b689e42587985034852a8a6c81b5973c2844e902e6d11 +``` + +You can see from above, our `Kafka` has been updated with the new version. So, the updateVersion process is successfully completed. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete kafkaopsrequest -n demo kafka-update-version +kubectl delete kf -n demo kafka-prod +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md). +- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md). +- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md). + +[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).) +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/kafka/volume-expansion/_index.md b/docs/guides/kafka/volume-expansion/_index.md new file mode 100644 index 0000000000..27c8e1f8ba --- /dev/null +++ b/docs/guides/kafka/volume-expansion/_index.md @@ -0,0 +1,10 @@ +--- +title: Volume Expansion +menu: + docs_{{ .version }}: + identifier: kf-volume-expansion + name: Volume Expansion + parent: kf-kafka-guides + weight: 44 +menu_name: docs_{{ .version }} +--- \ No newline at end of file diff --git a/docs/guides/kafka/volume-expansion/combined.md b/docs/guides/kafka/volume-expansion/combined.md new file mode 100644 index 0000000000..c20acc2ad1 --- /dev/null +++ b/docs/guides/kafka/volume-expansion/combined.md @@ -0,0 +1,312 @@ +--- +title: Kafka Combined Volume Expansion +menu: + docs_{{ .version }}: + identifier: kf-volume-expansion-combined + name: Combined + parent: kf-volume-expansion + weight: 30 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Kafka Combined Volume Expansion + +This guide will show you how to use `KubeDB` Ops-manager operator to expand the volume of a Kafka Combined Cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- You must have a `StorageClass` that supports volume expansion. + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). 
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Kafka](/docs/guides/kafka/concepts/kafka.md)
+  - [Combined](/docs/guides/kafka/clustering/combined-cluster/index.md)
+  - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
+  - [Volume Expansion Overview](/docs/guides/kafka/volume-expansion/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: The yaml files used in this tutorial are stored in [docs/examples/kafka](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Expand Volume of Combined Kafka Cluster
+
+Here, we are going to deploy a `Kafka` combined cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `KafkaOpsRequest` to expand its volume.
+
+### Prepare Kafka Combined Cluster
+
+At first, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
+standard (default)   kubernetes.io/gce-pd   Delete          Immediate           true                   2m49s
+```
+
+We can see from the output that the `standard` storage class has the `ALLOWVOLUMEEXPANSION` field set to true. So, this storage class supports volume expansion. We can use it.
+
+Now, we are going to deploy a `Kafka` combined cluster with version `3.6.1`.
+
+### Deploy Kafka
+
+In this section, we are going to deploy a Kafka combined cluster with 1GB volume. Then, in the next section we will expand its volume to 2GB using `KafkaOpsRequest` CRD. Below is the YAML of the `Kafka` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1
+kind: Kafka
+metadata:
+  name: kafka-dev
+  namespace: demo
+spec:
+  replicas: 2
+  version: 3.6.1
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  storageType: Durable
+  deletionPolicy: WipeOut
+```
+
+Let's create the `Kafka` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/volume-expansion/kafka-combined.yaml
+kafka.kubedb.com/kafka-dev created
+```
+
+Now, wait until `kafka-dev` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get kf -n demo -w
+NAME        TYPE            VERSION   STATUS         AGE
+kafka-dev   kubedb.com/v1   3.6.1     Provisioning   0s
+kafka-dev   kubedb.com/v1   3.6.1     Provisioning   24s
+.
+.
+kafka-dev   kubedb.com/v1   3.6.1     Ready          92s
+```
+
+Let's check the volume size from the petset, and from the persistent volumes,
+
+```bash
+$ kubectl get petset -n demo kafka-dev -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS   REASON   AGE
+pvc-23778f6015324895   1Gi        RWO            Delete           Bound    demo/kafka-dev-data-kafka-dev-1   standard                33s
+pvc-30b34f642f994e13   1Gi        RWO            Delete           Bound    demo/kafka-dev-data-kafka-dev-0   standard                58s
+```
+
+You can see the petset has 1GB storage, and the capacity of all the persistent volumes is also 1GB.
+
+We are now ready to apply the `KafkaOpsRequest` CR to expand the volume of this database.
+
+### Volume Expansion
+
+Here, we are going to expand the volume of the Kafka combined cluster.
+ +#### Create KafkaOpsRequest + +In order to expand the volume of the database, we have to create a `KafkaOpsRequest` CR with our desired volume size. Below is the YAML of the `KafkaOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kf-volume-exp-combined + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: kafka-dev + volumeExpansion: + node: 2Gi + mode: Online +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing volume expansion operation on `kafka-dev`. +- `spec.type` specifies that we are performing `VolumeExpansion` on our database. +- `spec.volumeExpansion.node` specifies the desired volume size. + +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/volume-expansion/kafka-volume-expansion-combined.yaml +kafkaopsrequest.ops.kubedb.com/kf-volume-exp-combined created +``` + +#### Verify Kafka Combined volume expanded successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the volume size of `Kafka` object and related `PetSets` and `Persistent Volumes`. + +Let's wait for `KafkaOpsRequest` to be `Successful`. Run the following command to watch `KafkaOpsRequest` CR, + +```bash +$ kubectl get kafkaopsrequest -n demo +NAME TYPE STATUS AGE +kf-volume-exp-combined VolumeExpansion Successful 2m4s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database. + +```bash +$ kubectl describe kafkaopsrequest -n demo kf-volume-exp-combined +Name: kf-volume-exp-combined +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-07-30T10:45:57Z + Generation: 1 + Resource Version: 91816 + UID: 0febb459-3373-4f75-b7da-46391edf557f +Spec: + Apply: IfReady + Database Ref: + Name: kafka-dev + Type: VolumeExpansion + Volume Expansion: + Mode: Online + Node: 2Gi +Status: + Conditions: + Last Transition Time: 2024-07-30T10:45:57Z + Message: Kafka ops-request has started to expand volume of kafka nodes. 
+ Observed Generation: 1 + Reason: VolumeExpansion + Status: True + Type: VolumeExpansion + Last Transition Time: 2024-07-30T10:46:05Z + Message: get pet set; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPetSet + Last Transition Time: 2024-07-30T10:46:05Z + Message: is petset deleted; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsPetsetDeleted + Last Transition Time: 2024-07-30T10:46:15Z + Message: successfully deleted the petSets with orphan propagation policy + Observed Generation: 1 + Reason: OrphanPetSetPods + Status: True + Type: OrphanPetSetPods + Last Transition Time: 2024-07-30T10:46:20Z + Message: get pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPvc + Last Transition Time: 2024-07-30T10:46:20Z + Message: is pvc patched; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsPvcPatched + Last Transition Time: 2024-07-30T10:46:25Z + Message: compare storage; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CompareStorage + Last Transition Time: 2024-07-30T10:46:40Z + Message: successfully updated combined node PVC sizes + Observed Generation: 1 + Reason: UpdateCombinedNodePVCs + Status: True + Type: UpdateCombinedNodePVCs + Last Transition Time: 2024-07-30T10:46:45Z + Message: successfully reconciled the Kafka resources + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-07-30T10:46:51Z + Message: PetSet is recreated + Observed Generation: 1 + Reason: ReadyPetSets + Status: True + Type: ReadyPetSets + Last Transition Time: 2024-07-30T10:46:51Z + Message: Successfully completed volumeExpansion for kafka + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 24m KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kf-volume-exp-combined + Normal Starting 24m KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-dev + Normal Successful 24m KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-dev for KafkaOpsRequest: kf-volume-exp-combined + Warning get pet set; ConditionStatus:True 24m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Warning is petset deleted; ConditionStatus:True 24m KubeDB Ops-manager Operator is petset deleted; ConditionStatus:True + Warning get pet set; ConditionStatus:True 23m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Normal OrphanPetSetPods 23m KubeDB Ops-manager Operator successfully deleted the petSets with orphan propagation policy + Warning get pvc; ConditionStatus:True 23m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning is pvc patched; ConditionStatus:True 23m KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True + Warning get pvc; ConditionStatus:True 23m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 23m KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning get pvc; ConditionStatus:True 23m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning is pvc patched; ConditionStatus:True 23m KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True + Warning get pvc; ConditionStatus:True 23m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 23m KubeDB 
+  Normal   UpdateCombinedNodePVCs                   23m   KubeDB Ops-manager Operator  successfully updated combined node PVC sizes
+  Normal   UpdatePetSets                            23m   KubeDB Ops-manager Operator  successfully reconciled the Kafka resources
+  Warning  get pet set; ConditionStatus:True        23m   KubeDB Ops-manager Operator  get pet set; ConditionStatus:True
+  Normal   ReadyPetSets                             23m   KubeDB Ops-manager Operator  PetSet is recreated
+  Normal   Starting                                 23m   KubeDB Ops-manager Operator  Resuming Kafka database: demo/kafka-dev
+  Normal   Successful                               23m   KubeDB Ops-manager Operator  Successfully resumed Kafka database: demo/kafka-dev for KafkaOpsRequest: kf-volume-exp-combined
+```
+
+Now, we are going to verify from the `PetSet` and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get petset -n demo kafka-dev -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2Gi"
+
+$ kubectl get pv -n demo
+NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS   REASON   AGE
+pvc-23778f6015324895   2Gi        RWO            Delete           Bound    demo/kafka-dev-data-kafka-dev-1   standard                7m2s
+pvc-30b34f642f994e13   2Gi        RWO            Delete           Bound    demo/kafka-dev-data-kafka-dev-0   standard                7m9s
+```
+
+The above output verifies that we have successfully expanded the volume of the Kafka cluster.
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete kafkaopsrequest -n demo kf-volume-exp-combined
+kubectl delete kf -n demo kafka-dev
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).
+- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md).
+- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/volume-expansion/overview.md b/docs/guides/kafka/volume-expansion/overview.md
new file mode 100644
index 0000000000..adb8d485f6
--- /dev/null
+++ b/docs/guides/kafka/volume-expansion/overview.md
@@ -0,0 +1,56 @@
+---
+title: Kafka Volume Expansion Overview
+menu:
+  docs_{{ .version }}:
+    identifier: kf-volume-expansion-overview
+    name: Overview
+    parent: kf-volume-expansion
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Kafka Volume Expansion
+
+This guide will give an overview on how the KubeDB Ops-manager operator expands the volume of the various components of `Kafka` (Combined and Topology).
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Kafka](/docs/guides/kafka/concepts/kafka.md)
+  - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
+
+## How Volume Expansion Process Works
+
+The following diagram shows how the KubeDB Ops-manager operator expands the volumes of `Kafka` database components. Open the image in a new tab to see the enlarged version.
+
+<figure align="center">
+  <img alt="Volume Expansion process of Kafka" src="/docs/images/day-2-operation/kafka/kf-volume-expansion.svg">
+<figcaption align="center">Fig: Volume Expansion process of Kafka</figcaption>
+</figure>
+
+The Volume Expansion process consists of the following steps:
+
+1. At first, a user creates a `Kafka` Custom Resource (CR).
+
+2. `KubeDB` Provisioner operator watches the `Kafka` CR.
+
+3. When the operator finds a `Kafka` CR, it creates the required number of `PetSets` and related necessary resources like secrets, services, etc.
+
+4. Each PetSet creates a Persistent Volume according to the Volume Claim Template provided in the petset configuration. This Persistent Volume will be expanded by the `KubeDB` Ops-manager operator.
+
+5. Then, in order to expand the volume of the various components (i.e. Combined, Broker, Controller) of the `Kafka`, the user creates a `KafkaOpsRequest` CR with the desired information.
+
+6. `KubeDB` Ops-manager operator watches the `KafkaOpsRequest` CR.
+
+7. When it finds a `KafkaOpsRequest` CR, it halts the `Kafka` object which is referenced in the `KafkaOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Kafka` object during the volume expansion process.
+
+8. Then the `KubeDB` Ops-manager operator will expand the persistent volume to reach the expected size defined in the `KafkaOpsRequest` CR.
+
+9. After the successful Volume Expansion of the related PetSet Pods, the `KubeDB` Ops-manager operator updates the new volume size in the `Kafka` object to reflect the updated state.
+
+10. After the successful Volume Expansion of the `Kafka` components, the `KubeDB` Ops-manager operator resumes the `Kafka` object so that the `KubeDB` Provisioner operator resumes its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on Volume Expansion of various Kafka database components using `KafkaOpsRequest` CRD.
diff --git a/docs/guides/kafka/volume-expansion/topology.md b/docs/guides/kafka/volume-expansion/topology.md
new file mode 100644
index 0000000000..253a004d65
--- /dev/null
+++ b/docs/guides/kafka/volume-expansion/topology.md
@@ -0,0 +1,357 @@
+---
+title: Kafka Topology Volume Expansion
+menu:
+  docs_{{ .version }}:
+    identifier: kf-volume-expansion-topology
+    name: Topology
+    parent: kf-volume-expansion
+    weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Kafka Topology Volume Expansion
+
+This guide will show you how to use the `KubeDB` Ops-manager operator to expand the volume of a Kafka Topology Cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- You must have a `StorageClass` that supports volume expansion.
+
+- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Kafka](/docs/guides/kafka/concepts/kafka.md)
+  - [Topology](/docs/guides/kafka/clustering/topology-cluster/index.md)
+  - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
+  - [Volume Expansion Overview](/docs/guides/kafka/volume-expansion/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> Note: The yaml files used in this tutorial are stored in [docs/examples/kafka](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
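+
+Volume expansion only works when the underlying storage supports it. If your storage class reports `ALLOWVOLUMEEXPANSION` as false but the CSI driver can resize volumes, you may be able to enable it yourself; the following is a minimal sketch (the storage class name `standard` is just an example, and this only helps if the driver actually supports resizing):
+
+```bash
+# Allow PVC expansion on an existing StorageClass named "standard".
+# The flag takes effect only if the underlying CSI driver supports resizing.
+kubectl patch storageclass standard \
+  -p '{"allowVolumeExpansion": true}'
+```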
+
+## Expand Volume of Topology Kafka Cluster
+
+Here, we are going to deploy a `Kafka` topology cluster using a version supported by the `KubeDB` operator. Then we are going to apply `KafkaOpsRequest` to expand its volume.
+
+### Prepare Kafka Topology Cluster
+
+At first, verify that your cluster has a storage class that supports volume expansion. Let's check,
+
+```bash
+$ kubectl get storageclass
+NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
+standard (default)   kubernetes.io/gce-pd   Delete          Immediate           true                   2m49s
+```
+
+We can see from the output that the `standard` storage class has the `ALLOWVOLUMEEXPANSION` field set to true. So, this storage class supports volume expansion. We can use it.
+
+Now, we are going to deploy a `Kafka` topology cluster with version `3.6.1`.
+
+### Deploy Kafka
+
+In this section, we are going to deploy a Kafka topology cluster for broker and controller with 1GB volumes. Then, in the next section, we will expand the broker volume to 3GB and the controller volume to 2GB using `KafkaOpsRequest` CRD. Below is the YAML of the `Kafka` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1
+kind: Kafka
+metadata:
+  name: kafka-prod
+  namespace: demo
+spec:
+  version: 3.6.1
+  topology:
+    broker:
+      replicas: 2
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    controller:
+      replicas: 2
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+  storageType: Durable
+  deletionPolicy: WipeOut
+```
+
+Let's create the `Kafka` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/volume-expansion/kafka-topology.yaml
+kafka.kubedb.com/kafka-prod created
+```
+
+Now, wait until `kafka-prod` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get kf -n demo -w
+NAME         TYPE            VERSION   STATUS         AGE
+kafka-prod   kubedb.com/v1   3.6.1     Provisioning   0s
+kafka-prod   kubedb.com/v1   3.6.1     Provisioning   9s
+.
+.
+kafka-prod   kubedb.com/v1   3.6.1     Ready          2m10s
+```
+
+Let's check the volume size from the petsets, and from the persistent volumes,
+
+```bash
+$ kubectl get petset -n demo kafka-prod-broker -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get petset -n demo kafka-prod-controller -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"1Gi"
+
+$ kubectl get pv -n demo
+NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                          STORAGECLASS   REASON   AGE
+pvc-3f177a92721440bb   1Gi        RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-controller-0   standard                106s
+pvc-86ff354122324b1c   1Gi        RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-broker-1       standard                78s
+pvc-9fa35d773aa74bd0   1Gi        RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-controller-1   standard                75s
+pvc-ccf50adf179e4162   1Gi        RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-broker-0       standard                106s
+```
+
+You can see the petsets have 1GB storage, and the capacity of all the persistent volumes is also 1GB.
+
+We are now ready to apply the `KafkaOpsRequest` CR to expand the volume of this database.
+
+### Volume Expansion
+
+Here, we are going to expand the volume of the Kafka topology cluster.
+
+#### Create KafkaOpsRequest
+
+In order to expand the volume of the database, we have to create a `KafkaOpsRequest` CR with our desired volume size. Below is the YAML of the `KafkaOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: KafkaOpsRequest
+metadata:
+  name: kf-volume-exp-topology
+  namespace: demo
+spec:
+  type: VolumeExpansion
+  databaseRef:
+    name: kafka-prod
+  volumeExpansion:
+    broker: 3Gi
+    controller: 2Gi
+    mode: Online
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing a volume expansion operation on `kafka-prod`.
+- `spec.type` specifies that we are performing `VolumeExpansion` on our database.
+- `spec.volumeExpansion.broker` specifies the desired volume size for the broker nodes.
+- `spec.volumeExpansion.controller` specifies the desired volume size for the controller nodes.
+
+> If you want to expand the volume of only one type of node, you can specify the desired volume size for that node type only.
+
+Let's create the `KafkaOpsRequest` CR we have shown above,
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/volume-expansion/kafka-volume-expansion-topology.yaml
+kafkaopsrequest.ops.kubedb.com/kf-volume-exp-topology created
+```
+
+#### Verify Kafka Topology volume expanded successfully
+
+If everything goes well, the `KubeDB` Ops-manager operator will update the volume size of the `Kafka` object and the related `PetSets` and `Persistent Volumes`.
+
+Let's wait for `KafkaOpsRequest` to be `Successful`. Run the following command to watch the `KafkaOpsRequest` CR,
+
+```bash
+$ kubectl get kafkaopsrequest -n demo
+NAME                     TYPE              STATUS       AGE
+kf-volume-exp-topology   VolumeExpansion   Successful   3m1s
+```
+
+We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to expand the volume of Kafka.
+
+```bash
+$ kubectl describe kafkaopsrequest -n demo kf-volume-exp-topology
+Name:         kf-volume-exp-topology
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  ops.kubedb.com/v1alpha1
+Kind:         KafkaOpsRequest
+Metadata:
+  Creation Timestamp:  2024-07-31T04:44:17Z
+  Generation:          1
+  Resource Version:    149682
+  UID:                 e0e19d97-7150-463c-9a7d-53eff05ea6c4
+Spec:
+  Apply:  IfReady
+  Database Ref:
+    Name:  kafka-prod
+  Type:    VolumeExpansion
+  Volume Expansion:
+    Broker:      3Gi
+    Controller:  2Gi
+    Mode:        Online
+Status:
+  Conditions:
+    Last Transition Time:  2024-07-31T04:44:17Z
+    Message:               Kafka ops-request has started to expand volume of kafka nodes.
+    Observed Generation:   1
+    Reason:                VolumeExpansion
+    Status:                True
+    Type:                  VolumeExpansion
+    Last Transition Time:  2024-07-31T04:44:25Z
+    Message:               get pet set; ConditionStatus:True
+    Observed Generation:   1
+    Status:                True
+    Type:                  GetPetSet
+    Last Transition Time:  2024-07-31T04:44:25Z
+    Message:               is petset deleted; ConditionStatus:True
+    Observed Generation:   1
+    Status:                True
+    Type:                  IsPetsetDeleted
+    Last Transition Time:  2024-07-31T04:44:45Z
+    Message:               successfully deleted the petSets with orphan propagation policy
+    Observed Generation:   1
+    Reason:                OrphanPetSetPods
+    Status:                True
+    Type:                  OrphanPetSetPods
+    Last Transition Time:  2024-07-31T04:44:50Z
+    Message:               get pvc; ConditionStatus:True
+    Observed Generation:   1
+    Status:                True
+    Type:                  GetPvc
+    Last Transition Time:  2024-07-31T04:44:50Z
+    Message:               is pvc patched; ConditionStatus:True
+    Observed Generation:   1
+    Status:                True
+    Type:                  IsPvcPatched
+    Last Transition Time:  2024-07-31T04:44:55Z
+    Message:               compare storage; ConditionStatus:True
+    Observed Generation:   1
+    Status:                True
+    Type:                  CompareStorage
+    Last Transition Time:  2024-07-31T04:45:10Z
+    Message:               successfully updated controller node PVC sizes
+    Observed Generation:   1
+    Reason:                UpdateControllerNodePVCs
+    Status:                True
+    Type:                  UpdateControllerNodePVCs
+    Last Transition Time:  2024-07-31T04:45:35Z
+    Message:               successfully updated broker node PVC sizes
+    Observed Generation:   1
+    Reason:                UpdateBrokerNodePVCs
+    Status:                True
+    Type:                  UpdateBrokerNodePVCs
+    Last Transition Time:  2024-07-31T04:45:42Z
+    Message:               successfully reconciled the Kafka resources
+    Observed Generation:   1
+    Reason:                UpdatePetSets
+    Status:                True
+    Type:                  UpdatePetSets
+    Last Transition Time:  2024-07-31T04:45:47Z
+    Message:               PetSet is recreated
+    Observed Generation:   1
+    Reason:                ReadyPetSets
+    Status:                True
+    Type:                  ReadyPetSets
+    Last Transition Time:  2024-07-31T04:45:47Z
+    Message:               Successfully completed volumeExpansion for kafka
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+  Type     Reason                                   Age   From                         Message
+  ----     ------                                   ----  ----                         -------
+  Normal   Starting                                 116s  KubeDB Ops-manager Operator  Start processing for KafkaOpsRequest: demo/kf-volume-exp-topology
+  Normal   Starting                                 116s  KubeDB Ops-manager Operator  Pausing Kafka databse: demo/kafka-prod
+  Normal   Successful                               116s  KubeDB Ops-manager Operator  Successfully paused Kafka database: demo/kafka-prod for KafkaOpsRequest: kf-volume-exp-topology
+  Warning  get pet set; ConditionStatus:True        108s  KubeDB Ops-manager Operator  get pet set; ConditionStatus:True
+  Warning  is petset deleted; ConditionStatus:True  108s  KubeDB Ops-manager Operator  is petset deleted; ConditionStatus:True
+  Warning  get pet set; ConditionStatus:True        103s  KubeDB Ops-manager Operator  get pet set; ConditionStatus:True
+  Warning  get pet set; ConditionStatus:True        98s   KubeDB Ops-manager Operator  get pet set; ConditionStatus:True
+  Warning  is petset deleted; ConditionStatus:True  98s   KubeDB Ops-manager Operator  is petset deleted; ConditionStatus:True
+  Warning  get pet set; ConditionStatus:True        93s   KubeDB Ops-manager Operator  get pet set; ConditionStatus:True
+  Normal   OrphanPetSetPods                         88s   KubeDB Ops-manager Operator  successfully deleted the petSets with orphan propagation policy
+  Warning  get pvc; ConditionStatus:True            83s   KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  is pvc patched; ConditionStatus:True     83s   KubeDB Ops-manager Operator  is pvc patched; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True            78s   KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  compare storage; ConditionStatus:True   78s   KubeDB Ops-manager Operator  compare storage; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True           73s   KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  is pvc patched; ConditionStatus:True    73s   KubeDB Ops-manager Operator  is pvc patched; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True           68s   KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  compare storage; ConditionStatus:True   68s   KubeDB Ops-manager Operator  compare storage; ConditionStatus:True
+  Normal   UpdateControllerNodePVCs                63s   KubeDB Ops-manager Operator  successfully updated controller node PVC sizes
+  Warning  get pvc; ConditionStatus:True           58s   KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  is pvc patched; ConditionStatus:True    58s   KubeDB Ops-manager Operator  is pvc patched; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True           53s   KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  compare storage; ConditionStatus:True   53s   KubeDB Ops-manager Operator  compare storage; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True           48s   KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  is pvc patched; ConditionStatus:True    48s   KubeDB Ops-manager Operator  is pvc patched; ConditionStatus:True
+  Warning  get pvc; ConditionStatus:True           43s   KubeDB Ops-manager Operator  get pvc; ConditionStatus:True
+  Warning  compare storage; ConditionStatus:True   43s   KubeDB Ops-manager Operator  compare storage; ConditionStatus:True
+  Normal   UpdateBrokerNodePVCs                    38s   KubeDB Ops-manager Operator  successfully updated broker node PVC sizes
+  Normal   UpdatePetSets                           31s   KubeDB Ops-manager Operator  successfully reconciled the Kafka resources
+  Warning  get pet set; ConditionStatus:True       26s   KubeDB Ops-manager Operator  get pet set; ConditionStatus:True
+  Warning  get pet set; ConditionStatus:True       26s   KubeDB Ops-manager Operator  get pet set; ConditionStatus:True
+  Normal   ReadyPetSets                            26s   KubeDB Ops-manager Operator  PetSet is recreated
+  Normal   Starting                                26s   KubeDB Ops-manager Operator  Resuming Kafka database: demo/kafka-prod
+  Normal   Successful                              26s   KubeDB Ops-manager Operator  Successfully resumed Kafka database: demo/kafka-prod for KafkaOpsRequest: kf-volume-exp-topology
+```
+
+Now, we are going to verify from the `PetSet` and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state. Let's check,
+
+```bash
+$ kubectl get petset -n demo kafka-prod-broker -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"3Gi"
+
+$ kubectl get petset -n demo kafka-prod-controller -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"2Gi"
+
+$ kubectl get pv -n demo
+NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                          STORAGECLASS   REASON   AGE
+pvc-3f177a92721440bb   2Gi        RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-controller-0   standard                5m25s
+pvc-86ff354122324b1c   3Gi        RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-broker-1       standard                4m51s
+pvc-9fa35d773aa74bd0   2Gi        RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-controller-1   standard                5m1s
+pvc-ccf50adf179e4162   3Gi        RWO            Delete           Bound    demo/kafka-prod-data-kafka-prod-broker-0       standard                5m30s
+```
+
+The above output verifies that we have successfully expanded the volume of the Kafka cluster.
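+
+As an extra sanity check, you can also inspect the PVCs directly to confirm both the requested and the bound capacity. This is a minimal sketch; the label selector below is an assumption based on KubeDB's usual labeling, so adjust it to the labels in your cluster:
+
+```bash
+# List the PVCs of the kafka-prod cluster with their requested and actual sizes.
+kubectl get pvc -n demo \
+  -l app.kubernetes.io/instance=kafka-prod \
+  -o custom-columns=NAME:.metadata.name,REQUEST:.spec.resources.requests.storage,CAPACITY:.status.capacity.storage
+```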
+
+## Cleaning Up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete kafkaopsrequest -n demo kf-volume-exp-topology
+kubectl delete kf -n demo kafka-prod
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).
+- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md).
+- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+
+[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/images/day-2-operation/kafka/kf-compute-autoscaling.svg b/docs/images/day-2-operation/kafka/kf-compute-autoscaling.svg
new file mode 100644
index 0000000000..aef6b92242
--- /dev/null
+++ b/docs/images/day-2-operation/kafka/kf-compute-autoscaling.svg
@@ -0,0 +1,148 @@
+[SVG: Kafka compute autoscaling flow diagram]
diff --git a/docs/images/day-2-operation/kafka/kf-horizontal-scaling.svg b/docs/images/day-2-operation/kafka/kf-horizontal-scaling.svg
new file mode 100644
index 0000000000..8fb704214e
--- /dev/null
+++ b/docs/images/day-2-operation/kafka/kf-horizontal-scaling.svg
@@ -0,0 +1,100 @@
+[SVG: Kafka horizontal scaling flow diagram]
diff --git a/docs/images/day-2-operation/kafka/kf-reconfigure-tls.svg b/docs/images/day-2-operation/kafka/kf-reconfigure-tls.svg
new file mode 100644
index 0000000000..eaf7f4a34b
--- /dev/null
+++ b/docs/images/day-2-operation/kafka/kf-reconfigure-tls.svg
@@ -0,0 +1,100 @@
+[SVG: Kafka reconfigure TLS flow diagram]
diff --git a/docs/images/day-2-operation/kafka/kf-reconfigure.svg b/docs/images/day-2-operation/kafka/kf-reconfigure.svg
new file mode 100644
index 0000000000..65e85743b2
--- /dev/null
+++ b/docs/images/day-2-operation/kafka/kf-reconfigure.svg
@@ -0,0 +1,99 @@
+[SVG: Kafka reconfigure flow diagram]
diff --git a/docs/images/day-2-operation/kafka/kf-storage-autoscaling.svg b/docs/images/day-2-operation/kafka/kf-storage-autoscaling.svg
new file mode 100644
index 0000000000..73553319b6
--- /dev/null
+++ b/docs/images/day-2-operation/kafka/kf-storage-autoscaling.svg
@@ -0,0 +1,174 @@
+[SVG: Kafka storage autoscaling flow diagram]
diff --git a/docs/images/day-2-operation/kafka/kf-update-version.svg b/docs/images/day-2-operation/kafka/kf-update-version.svg
new file mode 100644
index 0000000000..1e8fcdbc3c
--- /dev/null
+++ b/docs/images/day-2-operation/kafka/kf-update-version.svg
@@ -0,0 +1,105 @@
+[SVG: Kafka update version flow diagram]
diff --git a/docs/images/day-2-operation/kafka/kf-vertical-scaling.svg b/docs/images/day-2-operation/kafka/kf-vertical-scaling.svg
new file mode 100644
index 0000000000..ebd128daf0
--- /dev/null
+++ b/docs/images/day-2-operation/kafka/kf-vertical-scaling.svg
@@ -0,0 +1,105 @@
+[SVG: Kafka vertical scaling flow diagram]
diff --git a/docs/images/day-2-operation/kafka/kf-volume-expansion.svg b/docs/images/day-2-operation/kafka/kf-volume-expansion.svg
new file mode 100644
index 0000000000..fd87222914
--- /dev/null
+++ b/docs/images/day-2-operation/kafka/kf-volume-expansion.svg
@@ -0,0 +1,145 @@
+[SVG: Kafka volume expansion flow diagram]
diff --git a/docs/images/kafka/monitoring/kafka-builtin-prom-target.png b/docs/images/kafka/monitoring/kafka-builtin-prom-target.png
new file mode 100644
index 0000000000000000000000000000000000000000..24092fe6200e3271c20c96b1c9fc535795b9fc2b
GIT binary patch
literal 110573
[binary image data omitted]
zOldoWTdmSjnbQ!qO!U`_h2m~GH1a1G3m~nHZ<$O8I6x)#JnySr=Sm4LO?8R0Yp??6 zpT)VP8Ba)IQ(})xj?3*9OwpClF8!t8?F6jgdpvb^uUt<}q@Zgv=%Xa_houWsZ@ z{R8O3_(fasjx%wnP0@Vc?u^D@mDFI&4H%_~}bDf$MHh4aup0mMVc{*c(njPKV zLlibtb!~Sj=7+iU|L}+AHSXuJ3`OH#uG%=6Yun85EMFBpk4Q5zymV7VvAP-kC%d#BBI$ zvzq9~CRexAjq7bbR8@C{b7UKM`fb$6pl5i?+s3_WADi{Z?3KAC*RVDyv=bhq;M=iM z5LljdyHI1;uQT_xhH@N#;Lnk9)++DW=*K(gP8C8(yxU19wv5c#pSx*3Gq1UDN)~Ul zS-Y-V9#S3G#d#2Acc*jx{$#rkI(FEui54W6^Zv2lRyhgFwGl}kH9(Bykxktm^0ePo z;fJ@WLdPto3CV~Oaa;U)ZG>}vVhDC0{Y@byrsux(n5^5NCwk(Pa24I^*H=Uq>;E)P zU*W3|`#!?4pA)J|K>TVbtGU&5I(O*1RSXV%YL`B6A0FRwGd{;jGzBD0TDP+BVT{Fa zE527v%*p13np7Vl{0zkal`6;8yD&vmL-HoJElqxQ zqD>L^Q5K%FN_-#k=p(k8b??Z3B~Su4);;v3Ux@Ip(cn5A^z@y4%Ml+a5ouws=Y0#6 zSA&j`Y=jNEh&iza*-NdDSwoAj*}k`VhDoO}gd8`_OQNw8v1KOmwSO=&E5B_fTiP0p z{;9lb$gHk$jR)rS!UM{bZ(x8T$eT?P zCdO}0`wVKvcitxWOM0PQHXdsPQ%_9x#nHy%eb^*@yH`R@3xq*>j+!?#>S{|Rfwn5- zEacTo2vx(as(5A>ITYW`{`9K1X|b9xhilw~^6jYIwnf)qW@GH^{10XO#(MLw`Pg3C zAM`T!2QCdU7KnsX)u#g|d8I98hpaA+%=vpBDo1`vnd+Rm5$4{a8Zr58C4OEdh17!4@_Y+H-?pBxO`*|NE!j6vBfkP%wA%U$m3_v9 zW%?xD_Pr~f;RPp!S%{~QzLko`y8NM^NeQ+-6sU>4t*NoG+lFT@*K54<+$_-k7~wM> z&{`eKzb^8ZAE*VNXYRa8ol8lndVYn6e!Hb0Cd2)v&K``Puq%(2D^ynbsW@&1U^k+TTpM`t#4BVOCoDsxqJzS2 zEM-Noad##Ha`&F}IM;eadb&F-)G|}h>@+H7<`s-W5;9q1-4q%Z z{e{BxPz#T}3_aR@Z?=5);hIWu?UU7x6~S38zQvCQY74FLWwi690?VM1;y7yo-I!Y)v)3?w=dHUpyU(7bNE*esw@*4;zSP!Fs31&ls#FR80H{`q0k*ZY+F|Irfb3HCrci&U6~YBg2dD3 zh7$mci)#C}Xi(Go&Qt=?*!@04Mqg$gBb+>B?&hGFua*_H?jq*J%CsS zR6-~Hk5fZa=C8P``Gp=t@zCtGomEh`f8dv?kcRMsf_+&W)DERl55Q5VEB>X742lw79G4g90lhKM=9b(7Q7sh7yA-)R)2&P$*C3T?(|JryDg5&%+Od* zrcoX{o#01&4uARHUQ(IFBZ#L2a8o-iU2@{$gXdHb$O zm`Og?8N`}ab-sT#%CR<4YqA}FUE^O9Pe9Fgk4uiQ zC8&Vf(D!-9`Buo{m9Pmh0I33J;3~|<_&}(+-teuB&)JAY)-h7oP0F2!Q7 z0%k_y7@n|o{HLWAl z(yGqq*BefMY%|;eGLPjN89kH}a^+-FP+kh+ah1^i8$QP5U~789ggUSAoa0l9MJ2xDcqk&#N@))$|S#Q5qX!2 zUJ0yV4j;Fd^nu$0>sj~nmO2y;V-B8c(b~o~vgPD`C-@Sh90hl9pT@e}^sKy~?euWj zR@K*WqxmeE2b?#PY#*qfdU`fZ6UW=-mAp9K^1fqhCDa4{-)bJN^V$4x-~ALBs-JJ;N_A=;FTm7N?zUIRjV-ReWmxnq&jVN` znax$$*lz;9vFI@DbJri|*5gKWA)wRh9z{h{;`8As<@z!4YTH-cr;T_A%ZKOJgLXrl z{Lz*iL7vF8*f6&{`}|ivQQf=BzWSd%N4#Lj)ao9cz`bPjbD~%KCe@Ht=5p|+YgkVb z4eH6H^=4LC)K%*Lo5AF?nz9ZY`lHN_vz6r)7EcXcD46m;ogszqhRI!cfdM9FlZ!3L zv8P*Jxt>_7{2hxrFXbGensd7kO^f3lp+Gd-jtuk$>7L7qmS5_gFow84bWQmAXpd%h zp(-fQTIG7nTBM20%Mr1O1Yp|&mo-lAN^!*d>Zq5vyj|Qs<<1g;sp(eA9fe^cW zZOmv~o}WYIp^I9N5)Jp{#hSnm9?eN`2tA?54s<}2x;u=%FY|8u#!wcY?63lS-o!HN z$g*n3d}y5oG4-Tw7rEa`45M824x#N;Gc?ym6cmwo*PYgy zovpIH96Q3W{Ads&E3b!nCQL?|!y<&Urp5Q+emwF00#~sr@1!hy^lbF)zO*4V_b5jJ zF)i2o=w-2BXDZlfK<=t7TTb1~Fvhj9v%dQWFR}fj-O~bk$%2-XCJyMaOT%M$z==YK zgACc~*FdhnWlP+Mx-j4wxwZyJfpR#>{#={O+Qv+<2dpz~fGAJRn{09r4CSZtAdVe+liTOg?B;)Y^uQ+9(@rETEzXu?A@ zH`z#d+lN7l3jve;m2kmEiQ z0w%IBI|1x8qG!*xMbpK7?mo{tyBP+e5GFVNr{-nlda`Pa8206j!vO6-I5Mc_olMwt z{;9(b;TFVtUh+6f*(ViM)+V7`-aWoz*fbhQy*QcB%zBzaX@jV4T{e!%7-MJ%6=ypr zO8NBd9gM93tG_AJixGwl$;=FgNNVMJTM9Pt1idLCb{e?5?8Kh_B@);&4y)wI-$6_;r%ImwvNId;C3#g?O%qkOtr+))(+7ZxJ^kRhjDGv#=Hd zZ+7)DPCds|@&sY;6!P=$; zNI4}DgH5k)JQoVLO{s^yviUo-^?%SW_5XYH>!7=4OEeNS2Ihok6)er9k_c7puGLjyq%rEmCE{)-`oSCR?%4kFsH_enngAElfI5}6f@e<_IZ(9X)^ z!%q&U-5cpe_ht-T9qF{h_^)pNl40s)|7Qu`mxnPz3;_70$M*QwQPv7h&IWI4+2LS!rQbm}`)b5<`OJY1rYxkkW;feH80QG%7;^|i>LqDnrE@$2^)HTsH#&{#Jt0tH!?%NrijQ$fMS>O74*mbn2~E#Xf^5X%4~E z#(*%cJVf`}^jvX1U>>ACw)Kitp$c-j0!WF0s_ zq*oHvfjb=Q9UGwLOtol!DBIg+3hlm9$O-X)Y`OxNaq$jNK^Zoj51?Om1^v?N?HbC5 zC%%oii1|?ocsAmBmCmfxGA$L8ZCOrVC`LSapSH?7DzwwB@%Sj7#Ga6aNjW6$DJ0{@ zKZXLJ8pzuiZnD^dE<>KI5!}v|ygQiff@4^ppkFSoIu5o6FNkW@iSO0${29NO8#PX? 
z4WT}R{@usMrtaB`E&O(TitC;-Ns}|SL3yF*G>of@N+{pU8#28N0Ts@{vL>^L-j5$K zcG-(hwKlg8LUXpz{3!Qcvu{@OTd|8%1ypT;$D)}f{C0K&+L|FQEEcFmzsqc&8>}lI zS9zZs1IS&=jtToKS5NK!y61_Tt~IQNv(@HcA=X+%#sl|{4&NK4i0Ion%P~=o{HX_R zGRNo3L8}GL`{|xjL4jjR-ogp(6f8M*A-hTSdGE7%|8?;HM-xZ?WunR@3pfbM1cBss zPcRPn^F+t}T9^kRU1l8LiAC@@PZv~gBDeY6y>US~WR^MX3Ho%bq;CL-h0L>cHlIu< zwELD9@)aMBx^y8&XuXlmdYXfja}f`XBpGjRIC*mwN=WBd$Vo%^N==F`9XolhS~Nj z^zGe51?wtrzws?;4-7t!duk!?j;=e$AgylDFtc2|FM+iP&x zG3(jC$Jh$$xzo~xn|1E~?$;h)8_6*N-Q+Nhss+Sye&+5P_z*NFQGu6pjZ!S|De9gs zTjoLZy+cg}lMn!S3eH#HY}B0NoH=b3BQ;o7EDvaZh6DV?t?W=+u6ZFJE{-ukW#*g& z<>D)oK_3#M?pL|k_!sqZ&Q~2jdIBnSqi@X-k1&f_X>|CUKXlK;UBM_N3n^F@Hsdh} z(~rpybyC!_c)f4D=u#pJLaW6M8PN zHzVH?S|(#+CI3BXm1%$ZWfh<$m+NShkHm1hz`KMDt1zs(ZX(?k!npIngvO0PLE z+YZ6hV}|$JWG2pEA{t+81-Wl=-;nZgw@3*#>pZ#iFWOZBXiszuY_eU9^NgE(BuGem zsHURzUX;vS$`oKDL?}UR&-(ie=^Ega#gwSFx-SWO|0u}j(uI9l0QL9g5fqR2bFRH< z?>LRvrs%y<5ZuYsR9Bow?J07rR=uRe&$nT8)IXVBDzl9;jZjw?oYbsf1KpVG})E^SzL4RtDA73Xc{*t-qwCP%ojef zRFm*T;~U=DA7Bb}JJ}ts`3+Xh!!Aay?Z`<<9`gWYu07eRUp6+??D`F=ir%qJ1LT^k zys9KZic-(8iTb}=QnPSA`#Ndt4 zEvRuoVt-|`5edxi)Pf`1{J={&83#4Vpq^8|rL`R@sS`nFuIAVEwvn5fBBHvjku=7m zO!+VlwcUbgL$B7RN}DFFU1kpLt@e_W?;S$s>(3gFG`zn9P@$~f2fc{ES&oP~Dc6hi z)EGb>P_*F2jXG$+0o1A;6jOOHzR%fQJtR@FUOCzUY%x@}d z4N>4Sv0pJ>AQ$V{>zNe$v$Pztn}J-CeBEXf=TX$wy8m#4dGe*ZOlF=&$LiHO`$Ari zG5zyzTPyEi z*8Yta%-jms=^NF|B!4vPP)$i{j_euYo2SqrGz8JuAA<+OmGWtKRWQz9zyhY_jcH3v=>LorC z4(HQ$ey%d=EiFWg!Uh9ifrN>+QUnKPhQ48%tlLvX?&II<`Kk4A3rwk$T`^jkejdEh z>KY%Cv3LhiglS{t(5)U@DO^D#zcjQ61t`I zy;^XcWh;I?qXDsBg{#_MoF)2k{$8Plsx(f+_o`b<;0q6~ zvEDi9YAug3ze=eEp;PCE^o7A7pvJm=G@Xe9dsbujggb-?W*svScOtK7XWf~6yHlLD z*+*Yn%g7C)7@b|=#TAWyRDbGP!+xtR=rM=Iu+HoYX2mrH1upf1Zud2^BnXU8l`h$f z9Gmdf9JZeRyYcq>d&;5G$bGRP(fY+xU+dl{JH&D)$;@k59eB(;tG)(ix@~!D@HTlE zrz>LdED6SJLAI(f5%L{yLK@g~#`U5nde7kewC&B9OdxQ1qsb89!QrVxI`cn^0&cPz z)VL*2vEsBc^(yPTrF^m5q^{QU{?B#^II~4?)5~ym=JU4Fyl!_#3al?7D~BlP?qWV_ zXZ9xqd2gLsN+9zCSWeC+v^-dn9x;aflg{k%?@J^iW+c}K(l6@89DBne7XCRK`ilz& zPJ2%e-5&ta@+WLo4RbOg#ypTTZ)e9Tlal`&anpj}4`UP|pL#f?ng2cv0xmWl1td>2 ztDJAYN#g>?&YBzYSP*Yp(6R*M*QLL%-|^Meq1i>lB(0NJ| zetJe~+>Jk;@^3{`F(gopdC>COJcR0c7oOf7bJ^ zW*+Gu)|Z{%U#zc!t9D^~2&bYEcF+I-mdN$KtqcwcjRKTlA@i<>=p0Cm>`1TL4nxz( z_5s>za)rvR?vL^FAY7!e(U$OuSnj%r?u9peCNF51?e;xJx?ux7k@e`1clxWs+G{0N z6*mRte~VV~T4~*pNTThh_$_{cIS%VFEk<-_ucqz;oU(^@}*_F04ezFlvwGOLw zRcp<__IUwD7&NIw>kIcJZmxLW1?ed*Y@Wl82IrdgGLW9IA-a8xDc5(sEGr`pm|A^v z_A7-qW74{Jg(H!dNnZcmVyc(Ep;Bg>F#xjUyyhQ!%Or2F%`;wCI|evK`gY4j{qb^U z{Udm;Y*tMBdPd=OU&;EnS<1=UG6U3bQTqPpfBkZP+aKj^%K+7G2!U-0qm@8Y5823D z*t|y}7f|2*bZWNi^k7Lykd|q^@c(h~!FnEt z%QWqU?-GmCYgwbM(9kXJ6`V`KQJcb40fylcM>fj2l-)vjTqy*+op&5RD?O;5)3|#9 zU?@zSd>R$Ux*bSsS7nOXZLK!aopt&GdT7cy9YCi=J$^XG(wKMh6t!B82PAcDImk*k zT0;ZdOxAU^xUaI9!P%xR?D#z8TAytSY)nC9adKHK`rg-M+eOBd%oN59jJ%0UQ(LAp zC`houKY3<+i7^kK+6Q&}S?V*T)080< zZeW%kx?V804$D#^X{;&1lSI39tMi{%Q#3T~<$fMP&Lg>$5blUcJ-xrM(tOQEB6#PL z!Q;yfY$}~*X}!->fD=EMk>*R!vOSp3*%?+eP+VDKc`pvUvI%HL4tb!I^p0{?_L#4q zWZngBN_kCny19pnuXDb4oZa3f(tHxmr&b{LlVw^rzO=!K;6ccmyhN$lc?Lt9%Rgi3 zxt{Ens?XWmK4?x_r^u%slAPe=V|-==0+mhfKzH^UD33$6BA+Vy<#SJ!ophF*2pp4U zvBDNcQ>mdPQyw}x_#ewbl0|v~5kM<%z?58EyON~ilk@uU$Ckhabaa9XzQ`YKxE+0M z#OoUatIm#5M4w~Awu|$$ZeMP)V(m7we?8(JXyDnlbJ^Yd%h|IwZl(hvhWpYT!(mp7 zxECz2!Cpz!sRe#c3v0MBRdvuFT;(AbtO_a!a@+!3tpjr>vEzzNb#w+wiF;WjBqIL+ zHQuGKS@7F)SK!D_nM;EJ#(O^zu)z~nuJ^u5)H;A&m>pzydEn;cZPW`CL4BA2JuDR)1`I!?Ke=84t)*=08iHq&x0Qnc4#e zLSjv@JDpNtY^`iDE%eoB?2)3dgqq>zF3Xqg{14Jyp`Yb_<)SM~QGuV8w^X3914;YF zdNG8hot);)ty_>I0j-}3s{2ar%G%x@%@Ja0~9&1t1zqJEy;Lhfz zzkG3Vr#C?Fk#74eD$qI+Tug}TCiEM~@d9D24Xvm}Nw9g@#!9~w($U;Wp>*e`||+rE|l*yc;q>3c_A 
z;9ioOmyHy($T?q5%}P<-PxJeP)8O&#jV#FL@4$g3si%qI`*X`;!CDGSKPCUv?;&OR zhbmDECnwHNTuzojAy%lBGsR}sZ=Sc`YQ|A-v*!7OQo$%===9s^m#A|PFY6^4^iUCpufjOc{72L6lkVG3BiKaQ=Z^VNM&a5!Sh$@Zr| zx6}cTj{mdW2IWJwrb~0zrYcLSs_rTlAvt|ebI9X&`*>TSB2AyeL55omZ*doSzw4RX z<#2yC;PCLkIQ89v~U4Va^;K)v$EB8~|ybuO| z^kDO&oJ1w{95x(bJVY+IFmFd+gofpgTmk?9fAL9C8;LN7nJ35;F4;n&=uZX#ke0ih z&ZXz0R_aVNVJg*WhtxT)9)LWoCGd}V?_3R`#HPB%kEf^iYLkJ1?b1!CEg!{*76j0L z(Z1>pGmB}w=+d+0l{~ym4P9jI>{Q|7)v^Kng)*#11%^EU>gaCX31V+zmZ9W& zSx_h&H^`gmwC5{l@+og$+fr>~H0!3Kl5t(1q(;oG-f1j-KE)8y8Ns3;#oKjLk(Pwy z@2_j5>54c_fL(zwvwcRtFVMezpL57?ar>S?0c@3LYctO3bUE< z+_ps24xms2s{Db=V_^`w>7NTh1hovoXT-NEXHe*~R5vDC(b_)@i2Lg@<+>FW*5cUi z)_8<$q5lW!O0158zG$o46N3aiajV_amCiJOSB{uS^dU?sRp1AHDw0>LGE`KAFEvF- zPJEH`VoV%Mv5%siyJC*f?+l&=foBuOGH-Q&A}3?j8`+;lbX?qmY%*9xUJ$psB_7!$ zk5pk4eGRkegWrFoTLtkX;So!T0PkWyd+Uy`kB%FU_x{*f!KJ{-t7MIRhs$h_$_Nwd zwxJc+XGKYliY3R-a$2eY5OJ2hEYI0|v8FSt3%o(C__TFLTq#PGN!17Svf=ww)UOks zM8^SrzeL9-N=_x9BqJgeGQU&m9<3pI(;lS7T3=)RE_<;)iB^FkhIGb`&^JBWZ& zyd-pJ%U!%{d?0L&erb$t5h0i4$$E%B6od?wxZz(iQp0SHCV#wmcduK@Oh=QhH#l%Xhd|8TnG{QKR1&=AAsu9y)fMj!(t?rd z$_IKTO()W~uWp+j|*xR@nEPY2I0yUF87ex zu^3Ur#o0}!9WZAW6AplW8e6t-sKU0?VHttjjf&eO7~c2u(l%W&HWX(G@7AJ zxrXWO6|j&PblHsk=5p^T<8z^GJ4&<7j~fTg?;E96>p$r*{`94(Yt2tzwbRVEMLuBt zjp*S>Kbomelz;}BU&5wC*cS7%6X!#;bF!1u3TM1^OWwX347TizVOyzPcj+%pv+9gk z4WI6Ss_r0un;Scbew0%h5q~ux0$qL+p1D&?Rc7mqTVNg;ZL@ZW^-B|jsvNO`B~hIgZkq$bhQcAJ{if|0-ytpCHovsNqAYw4V+?835rVI?7E- z)UW;f=L!20&+V>-XM{fmDFuE%+6D7Ir2a%F;is(MK)!D!Q@DGnBL*mL6#jwnSwe*K zBLmNK#16jjTWldZ|9v-`1@1fDMb%-VR?#f3X$$ z!I@?O=B11#4X2?5CekC(PDax4aO%E4OV21@#80__`@b}RBk51uOXZ(!MLKKzmzk){ z@)-C6{lLyC^6~f7$aLB>Md}1c;9tn%XnLhfZ$(P~)m;aXHMHeC|E0xbp4b00W|`cM z&q(XP^cVF4r|s1(&G;{4e5YIV&$3A8OUTm~{BJ`PWA~>q_w*R$3N3E_rvogE z{?o<4_x_(hE5UM_KOW0toQ#VP1z$^F1zz`Gvg0e|0Uqj7C_4$)pMpFw7wlL{uTb{^ zFCPXz-MZWWxaD|#*uC`huc}0%PZKUR{Ku^LE>%L_hm42vrZ)~lNfH3v6v>^d37#Kr zJ!Ap=qDhwD)X4pN&!Ok>HzvDwZ`I3FBYx?brYDVL84t`fqlyo_P>4_wWOUMR6KQ!bc=` z61e~3sk|onUyt~mJti{S@q4z8MVX3>uBy`U>@dEes{ZHwRps(k)ETRdMfLgKHsrKUiy3F)a+1lD?R z{#8(YIbE=0FLohE=VOcB?9Y2QY^Q1G;9L0%p)h9|7q_q5j7tbhbkp+%x)DdS*9NDd z@_ugzwwE>?=B+V4@6L|(TC}>d;9q^034vF@OlAu}_rA}O@sT`4glQrON47r-@O`$U zgfL;r_{6J$o_LV+2Y)r>|L@ zVtfxL-q2*9b_jkCky5?1>-s@L$y#FousSO!cEqF9*3LqkKJ2~YO$vKhU1TzPQ1HND z@xZvg#ZHcLmQ3)vb1wIO2vJkX&AREWi4fpgJYf2>ydj4__<$mbJHBwJKG&{IRC7RT zba(qCkEt*;N2b^I7bEezrbX6_Q5*U;uLfF4;JyY|&|#X`TCI50fcNvX9nRZAUEPAw zBcmZ!{zPw3wJVo1E}~&jWMH%TJh?@ac?6H&urE&8oDU2iSWN?}C6OC|l1~%zIf8^L z6M9UO4m`nHW7)h*TC7Jm!(t_5+ zf>KiYpYRIbgM$zo3+&s^q8~TT`{}JSQ6rUyK`@+V?&M5(&0{nJjG;KoAs&WHmJ#0*R(|bj zW{-$K|17xyy`R=xK*hf`^Yjqji+4bYd4;pY-1#5_CePT!HA^)|JVo13ZF;@^sNc-~ z@GT1>hJIer1i#{a?@I!ElEIE?t^1WqtJbVWcwOFnfUkt$V(kDJ7B3X$zyQ~SptZ_; zl`Pdf+Uw#Q-7T~PAh8QCsMubBjPA1YQ;AGs=GbKd{$M!*|CC-RV9IXwXv91+bN$byG zuNfW;c3$Ml#-@M^@=Iw1#Vwf?1t08no6**Z2Txk4J5M-hYCFQr*RM%I5x=R9uiBuc zpEC7YVbA%TTN9)m=N8 zG8)0UE=1s0O-64LqqtI0z}aIY3`Znr83c};Vm$N*KDX_fhCP}iwg1t zVE4VoT@Xt`2CPz1>^j!c1ShVX$}~c?jz!Qcy3@CN&Wb%`13!W_JfE#Bta!7D)H|~dCKTF9`&(Dgj#{ss^Zh@!FOWxvX}Aw z`}t^4LC<#6)tD`u!*#}yx1=C9CYP0R`;(@AGB%06T)zGSGW*`f6(j4e>%R$I9NX00 zPj{ZID?$gg#VsEUj^(tLUmprjqq-#~U}8VpxNUNjAO;!W-}F{vZn)GV?m>EV>TQ_Y zeBFC42TirkF2m3kI@kB+jiv7N`&mQ05Eb*n9^0*NFmJYpWOx=@X1(aluaY4ZV>jrl zBTPs&Y=c8FzF!7^v5G$)v$Izb{*b&{Y;@Awif9_jIH3C{Bi{gN-o%yH)DnIzvxqNwCW4lag#9aye%|61hax`A3il*< z|6~*qkDds8dwj9}Z=qj>1`pFxzRfK3mP(_U3+cE1 z=s@nBC+pt58VbbEMA7eE_VfK)FU==|Myyj4VB`|1Q+au8s%5n{K;N5InN#$QUq70l zn5Y$5n*Bbw-cCNaee?SA_`V^eGKv@OA6bB{sVhe>S&>n!FBX2>kQLuo=-KCTa&I|3 zTl;P1mf^RLrev9PlDRQyr+#3(*@mKIf#jLS<3f|Fi`?PG*-Nq$%d*3std(X=EU&7p zWvUo=n$ipN7g~KwrZPj5 
zM)a1+d^_t^rj?vT|tw@n-RYEntXH{o;8w^g-8@*@w8 zI>zx^B$1^xb+32FDn?$qQ2jU>wd@yxB0hz>@CZh`Fe-hD_xNo7$%FDr4s z+O?r19G~&)ol+`4pLjkE{D-eC@`5Ph1Exf4*8owN|_R!jekEDk2BI3xnK!Y?_S2yn zQ)4r$i(psMot`wTe{xW)C0~WoEeP!>vX^0NtoS%CJ9${%U@6_3tDrTg&ZI%?5Us0t zN!i}Ee(7v5e;uD@Zib%Z)f+3@J1;_gZeg6z68#xc|5hU7*LdW;{+ z5|*5wznH^7I2b9$js)hHs-d+r&HE_Z&?^__k=$V&&CpT%SJSybWrfa|45nfge&)l$ zmv+~KLXfzE>j^FIRqM0LT9sx8x2CHb1w`NjUvp@3ge3Qt-j?f}nLm-2j}!e9xqda| zB%@{4t2mB+_SM_{>g(t$SfP1hqdI+WH7i{};yLeKW_wU@aYgqyQ&NqM+tdE8cVNKE zTz9EB4XLweu){F#arOoLs#YhRGFU|ab*%UfvP^Bli{LN&d4qYP?>)2ihHJt>Ezazi z2!57*b96-&-pk$Ohh=Z}2r#E;vV_;yZ<|iqHDz{tC4Pr^RRuQ@e%?{P_s)bLIbf+| zV4E=5fC>}v~kEa6XQ--mxCScMb+<#;t&*4&}E7g`4=gqnl?5* zCh@teLYoW}AvSG3V#R~}FRcdP`HOx@)A$=-IjJRjjR6p$NMW~}X~c!NdGzi`1o}wS zz;o-ih?W5|fo6{WlgH)=R$Nbo$yZ5Y09)*I+EdWV7Id$Ca~a1-fIn&)YtL_0G-4Wv zAsmu3jf5b_xe(1ctHDArQt!#=G78lPrC@09@ayP%61{p8PjsJeW4sa^2`vx4bO<1} zAeL)CR_z{$f}_CL5#DqYyX_0vUlM=qC$B9p>x`XrTFiL9h^z|RzhbROxX=*4K3f}m zK1gbN9DEh6squSX<>{KBB|L7z_3Sm+|MQP`IWjF8MVlw5SA#b5sP5vbmdZ?EU{geJ z(bPkqOz98yZ?uFqq~_;U zG@+Yh?Z<{1zO~^dm8rQ?t!tG@5*sRkg3OM0%o#4})CQQy=i=Yo-P7-|^g zKjb!z2HTpGCJ^=SP28r6G7wE~Okz**3w|w*yq^?UK5hy3ytYvhM!@p?+OQYgLJe0{ z!}9Hlm8nGf*KroB>$gYbKQ!gu*H(LGZ%|xOkdc;}-Nl%cpLP2_@0rD%nf#=sE*7X?&rD!_UgJxjxyItx#qz3ww=Zi&fu^T=)1RYV~*U^l2$V!=0n|X@X2io~!Gt zdm0VNjqx3|KgL*ElwPu%qxMRuxh$@)T&G7gry%Zr>b^dS5ZTiXo_8oQENN5h+wme{ zg8*6EI`hcndI52B^U%~K5WPu!<44SCwITmc!Z!KZJT53B!O@%Q+$hn}QKM7{;P4KJ zLYKh?XuIX02$z>z%IfBowTWxkBmXa_8xNqU!*_SL({~-_OMY*!DPDaTb6bw`eSF|i zEqy(23FVkrqN6CfipteQIF>fT*yXfZ(aT1$v^i{@q!K%5t|iF4_D%tXNxr9aSWcO< zzP#GAWUW=*4$GP#I;V?8_jDg2#D{)xWToRf?a%VIIl%rZ%hxmNISh0U{Y%y<1E&|_ zE!VJD_&FOM?GF8#kR#}pI0d}fbP0379W+bkx^V@)y(*gqgUx&9dblixOn=6!3qQH4 zv0z?k*n3iQa=|)5lPE)=sZ7ZlLdfd-DDF^|(uo(YOMw{+E3Y`o)zb)ie(8~f?|R=a zT^^8P;m=$So5n4Fy%(gF9P0Bdo`HGiNJ{*}!RP8}hx`e7>-X5uZH!Oins!Z@%M9^mw9>-;01?;TEjhQiThy1ugGiTMh!cR1A3CelO-8?^EBs_JuKwd|_ z3gx>t`JF*Lum+m9+L8w3_|qZ@DQp>=QB*z4>_{$*K*alDphm;tEMbg>|QWw&&BGHktQ>7)*^DlD(omEg?7IP#STz}+wPn!YW?bA}yf zM@O79$@kbjj?NNw*dk4`@rr#eI5t^P2?Ja(G?zmnEF<{h8~i z-JM-IK`Oup3-e?x1^ml}X#{52B4pLHj?%RbXg#KAnX~dc7D^uPi`h5|(xBl(_r36X z{VDr>sP;ZjlX7yC9hVg;+%zlW>|1lsqE-CM`SD%8jYYfR13zyogaMpS{kwvW>C;)J zC7WsSdH=FtQjY>J`FX`Tmu<}A(*m2kQ-c3qE__#HJMIauyo4Gpns#Y-t>vJm@5H9K zeUNF?%QlU5(ih^5#4^j^1@3uo(V;J0M;q6O_Td$~@$2|-tqK42-78-6(r@4Tz|3tX zhOo-4R(_hpbvv8(bCZIr#_UN4r{Cts+O3_+*6Lyw=qc}2 zZO^d!U43ly$D4KbH&x$4lFdT9QJHBDa2mmvn%0{uEj91R$!o1KHNV<4a+YW9mxGhG zM&le}d*E`9h${yZr^xcN0KDo33*~@>T^=!Q$A4G)Nt0zqwPWgGs6yX10OcP2>CUwNe3ph^4GduRZ;X0fi0r$J> z2I8nM?bW*7Kql!Qj~kCQhfua3+bB`8C@4qIqd=cunm7$yopE?Vm*9#Wlt%st#z@Z{ zI~Xf{+Qdzih2q}ML+W=;b>R^c_Q)#~GdXvUBr$*qBz^WXrl zRR68<)_B~?abbgvIXZ?=UNF;>OC>55;X}iJEyt0fGd9tofT<qO1PMUiD*wArP%Jr^`T*u(l8~Hi@S13cSMp zaj)!Kf`d=>hayoFt^m8g-@w(C+*;sU}@HUj?PU7y5Swx++m zWFFxeG)&$2;CNgX;V|Lmf80UbnJR=Yfee1eG$wR%UP`WD8sVUd(&XU3DNmc-wE4&2L}Iq(`@%g&>BFms3>FeCo;BDX4sy{nrWP;?Lar`I#S?xYz0g`)F(0s-MMN zls+E3{u}C>@EtQ-jyQ=Fy=U(SiPPZn@k<7EekQjhoF)rWyMjr@ZWlru2|eeu7x4 zxb4Y>fQtYknCu23qJ`5QK+}I)mG&kfiV=Z0-JFu<7mvWxd& zZ=8PaC!y=E)HaB1i#X*Y@&QDdNUg4f|0)^VZOLAb~%trzx{#n z;2M)PUsd0is?od2eFyop)yWQHQ@mGC(%6vQ{||TX8P?>suK&(y!A238^d{1~0RibM z2qH}oLArtoH6T?w3JNH_h92piNbfzr%v{a<_S>DuSRIoEa0 zcP`^K#yiUMd+z%l8a;a+#;_}!E^T)(p*E-k*rB)0o$A$(^m_siO(U8c!`Eb}()c!e zo`64vuXI@6m(8mS&sw4qsnH<}O{*q)q;3wyK4`m}aL22{nUbwwyg|3)47?zBBvIq| z80))92|psIoe_7jI&f-U;yIWt55a=*r?$^i@>}yp{KBb)A2q9WE1vnJ4)H8V;@Dy$ zWxZ67gbOZRO}~!@LD=fmnVo%16WTFBRa($t!F3ZjO@`#%ir4bRyVKd!U zeDJ-Vd`DrT4!&sMT`?PEC(t5#VFC^Fl-|??voMEIvwD`eu)9TGD!b_TLH~{R!TN>F$0s%z1v1eneCl1$o7=l684>((H0$W2~Hbks~Og~@<#uYx*BJ)<9p4Bl>m~h 
z*J05{%u+}?GQ(={Rjp{1jnzEkS@Ae(_CtBmWo$9Br z*v%APLwAD~Z_qdX!EX8kQUI!S0aa`_|A4xfY=uv$#?%-hU;DYJrF3|u3KvCs5+gR+ z$!bBFp{2i*a=K2r1A;A;Q|vmMK%!3!w$+1y#Vf>S!Z~sBiDP+2=)=us%5AgH`4l$V zj0%TWNI93^KIF0u6NDB9yJSIX7|rti#(@GOXf`a3p9uj z?6EPIqSLGb8NVPoc*&jwUEO4E3z3))s1w~KZ{Y$zh}h_67^v`Zl;j~=9wa8gGR4AI z?#mx~t7IdWSrXL94MjMK%NsJsvoX0PNQ0WmDNY8?iFE^Pi&UmnG-8uijoDi(`43yEALeX>uV zB$drXY#kBz%jEZF*Jx-&{e&L-y7^R_=z((&NFk2O{;Vtw$XVI&HM2CBMEYVjzRM|8ozaEn)~uvpMQ5lFkC7BXbv%>v!n(f{j6lPeL_rp}zmfhUY_4c9J;N zrLb{k?~)XT6e?Yp&NmH|3e7eytPT|xk4mrEz)wZEsiM%LSJ6$IQ<_!9ABZBpTK-tI zOW5cxCup~}=}^J0M{~(k!f2`yJbNpY!2RyG>p&SA?60@9we}{Jr5nsJKy{!_X0Szx zU&24hD?46w2M_kL=!m~Jr||y$hcCfCP(tZ>^UYEa*m;X?m)P6g-`9O)9vSTFy~62b zH_c=uGU=J?N+WD%sP4#$zvWK1N{WtLse_ zuJ=i1YC8P5LGmDeo%ojY1> zgcDnH7;e2Rs1;2uj5f=7_eOGn2?&T~j}^2zz_Aas&xI}*eEE(<+_!BFNnkFSv6n++QhkxDI(Xilu#mX2YR+@-GVOr;yCU^eU%0k3R@i>*Y*nX&`KF%oq&tf3Svgrrj^&O> zUif(^mI_*~5VlXti6vsriVP5C9N58@(o->Hyjm0UHlZi?^oyiG>-Ts0@<6>0)sSgo zxBUFE1u`-0QUBU7&l_X?BPqbNUBy*r63SE8XWC9xx`@o%D&G4l`S|CeS@j8rLSlbe5DxSL$hOX?CEefL(s=~c57|1#H%Bgo?Rcl-}U|~KPL@Ywx5sRWca%R zl)vQ*CeHesh}yQVa`>b9rFOrkg!DGSA=;_X&hyu1f#Bw-!*hNWkAdk*l`b>+1V?ZUr+8WO^x3r7a)Jq}=r9X<%0}ZSgHh z@#>ZaV|FR4IZdJ6aDAAP0S8+*`BkAG?!?YVe%XNQ%6H>NQ0MLr$Lk# z!OD5VJ+!8q{%SC7_KbDM(E|DwVPj3%Ob5Z{yPwTIANzcok$Igea=LR(F%8u^pK?{> zQM_43TbMO8{gQa*Nb#nwcqRhJINRxsYq3E43fDX0b`t)qdFh!$P8mg4c0Yz<+s!!h5cK)5(6*+GSwskx}Pl{RT5e zcj48j`lH!vOYH$OM?(yBg7h7hj3a$yXrxRtLQ<19h?GyNd}#+b75|qeK{A zy=YTqH2j^U3#;>}_AU9P1Nu2rLQnJ)6?XN$g?js));DnHeQNI>Jo}OhCJc{O<)VPC_;nqkh!DLn;@ZlQqIw~YWMztfPdMGCmW~|CMu!&%RqBXp0W_{!D zM&?d;ghVurDur3&OBYC2!(nR4DeE>4DIYoTY@@VOMTF*ikBp&l#A;}Z1c!|_p_a{t zRs7J6OZ;Srg{D<}RDQMb(+HQZ`LO7f3qQL{GYZI1GjHFgWuu?QCzN~4jh0Di4c8%2 z{Ahb!wPv07YM$GBrrjEn;+drJEXiF9#k9fi>iEe8Uo+L<#%5u^;|DoQ9)98^t;U#O z^L{J#jm(nZ)TC)Nngbk{!h(6J;S;9|ru+kz)LG!x5rRa7uYO?}h{4s|RNW_cRu+}i z*|FEn5Sl&l)M4qTUjx4n=RYz@4CYQ*snC7t069EsJkIDOWvyKdxW6ukq?Q*S)EEWnUvbs zZHaYGdMG3{O#%OdNq2Vp(uNcew`t{2x0H>{5b7>seh^1Z3P>MhK`+3DtP`57GNESd+fJ?Z$)c78|UT)bO&}fI7qP9vYb)wS0hx z;mkYm^s%!naQACkU2;8TM_J=0E%bn!VDJ&qrp@RQCzITj%(*d%7jocpmV*td-@&_| zF^NxiJ)yHwoweT?6~r7?iF6nfb=?km_YgoAD&@I8&X(wMs?ifxDjJI$IhaP^>^k%; zmBnM`T86pvn%6zZG8i0m&Bs;9$1BD1E_tr6OUL(zRHIwYamuhJ&QwTM-bYWq*}a>% z?0sRY{k7@4Ra?l}XbO?i#j&%8tEv4aJ4)TL1*-0Hwr_eFJ(WvSA0hVpEt=u7Y%vZX z=|A6VtUx-oj_NdD<24?St`r)UsN|J;>#f4K(dxhb8u$o@YKpP2{ET2&P zk1DEyiyHWk&NOQuS+SE{+qA2|>o_E+ zx?dp4u)ca$>Z%o4Tvv_rP;>x{R%^e=r(dwfx^*X5uu{p4;pl{niEnH}sWh;!m1Xya zOdu!|TN|-m!nvQ!5Or9;Rmng84OQ;)8)kbKUQiA&p(O%YQhWD?a52NQ0&-kj6IS9D zu518(rY_=9S7Zqk zXx`_X9n0R=ECFT0=b2&4$0|{otfJI)ILd&uXlQMl?3&m}AHHnawRpgwV`jw1#?- z%iu&0ws`-cw8|aMijH2StVhb7qKAFVTb5ZMyk~RC?!!P>)h>P59e*}g=d={DwK4g= z&lmqhu@4ysuxEE32Ya7JF!hF#`IByudi_sj0W}WkqOD00tO-xXdRp^s15ob#<5D|NO`3{rI_ziRfLX9>>Wt=3(Oq6E&rUUJ*nCx2u;V}73IWuH1v zyTl#n;=HaJK}~uj3>((+Gxd-+g27SD?vB5bV&bx-$Ew=#%&R9@r%$NksfQw2TdM0*jnUnA zRa}jx;VCbXE8q_eN{7@up!fs%s>aKX#}N)&S>NBiQ(EF@& zNJ)O)li*uZ%kn&A9_Cp!fnpYM^o_;0zlBt@Tfa}wa(%#AZ+;~0DBIyXQ?kX(t~y9% z23LF{9zmkN6(gBhaw_au;z(6$sYZ-ZWR%m3R=bhrBM(6xIPo1zWC6Vruaw1&AP3oH z|2Ny;t)g&)E!afQjO@nCx&(9Gfh5ARYNz^-mu4zs{8+aDq3c;b?1)EMo@O>M>!htypg~lu%JyZ&&-x=qxsgu1i=E*Th_`*N5JBI zwW2xB7xDQS!XdY0%lF=Rjnq)j4b>j~0cMn-C0*MAsiH0BPUc5N>jv+5MiZ2jJZWM- zHYL^ImHs)Hzm~%(-y{F%wC~KUGr1hCxZ|LjaK|8YN`>nrhZ~*t{(cU>`hnfYe zSU-fN#Ezy_ALzq}C!9u6r$0m~N&xv}`f8u~CRwWjb2lHet@1UWz6y)ykLA|(qGhP3 zw&}huzesfSnRG9DzCR~`+0mK%{X@1(^z+-In}SymTR)ENN8PNyA2vGkaPi!6@)g2o z!h0@MB-qonhR`jvzg_h!n!QD^h{8B!N%l0^6=?v!3ma%G$J*?@ddY~;3ciNp5-q(F zdg;$5pU6ZAooSxGLQSk6>2P+y-0dXC>h>AFNN`2*1a7yAJ<}E??Nu7Hsoe1p1N4;M 
z{x7-2mD>B`%o#jYW-gxzo2TO)MZ@Fk9KSca$fZ-Rx($iB+lY(nBwcV0x8C}Ja2VWX ze5SJbtnF1Sp>Nm8DDU%^2vGvw4U}UZL8T8iA4Z#RgFV@YoZ%d9OP1>O5*6XuBAuGs z&QRx+J?;XkDJ1!`+_4Zk2^U3*nE|&&-^d+#<%a#72H*D!vTqkS4%Q%R3fK38S!CU` z@S6P`wd6m$fr{aud2x)`1f=VRn@|*KQ`St0!p!9d&D!$hfkBqG6*)b2Sky*!bO=Co4pj$G+L)I>FG;psI>b8 z0wYlJ04*G7!amE5KtJ#LO6p!>=T4JjWMY54zss(#Iu?O{etoOFbUAg?7`ZvY<>#pg zwHgV$zJ-n0+x6@qDlk-nO8@hS2Qv+PBMFi!-IPA*D~R$;T{SwbsM_z?9J)ijwCOAH zfcR5UA}nZ~l=xQI4)Eh%S~)VWznH`hS((ca9zGou5NUW2KRw94`>i_ImADV6U?wCg zbv3}I!PpsDEv&kB&|w|Wdv$C`PVkn_G3b9herP&#STwXBgnr;Q_^~sTdiFS8pn3hn z$Z)#!1DJN<@b!n{MOG>+Pax+@PFEizD0=*pGY=eHm`@BVZ}O|n;ZNBk`$r?x57CQ-2fLH+`&FzfF( zXi=rV$39Q!vR^h4X&}EK|2tk~pSEW|==1wOYxkCO#|}Xb{YxK;ed^>~EUl{&L zunuUVj%H7npVjYfNqs_p|1Yh!e=5J^%NzV3yHY=EkhC&`OsNO?#Xn!PHGTw_Dcrg` zE~27NY6J`zbOLyd3qLVI%enpn7JUZ=)4Q#QKjH&UI0-3kxocAV zwA^@c!RKG>d}IQqK=NOwAajmEIs>po6dLbJQ_{hKFY~SK`G4_@+yt8XO6LKWOQFKM zc3#`{3!f+c)sqqe2vyO~KiSrOUX$@n$`^U%dkf0TnV+ecAWy1NHR7%AH(WJIs%$ z1MSC*nN#Eg>*q6FMP88}RKcu|jc2y7$|Xrr2iNrWY-V`jsIt9wTtcH*Ld&sDSxYTL zS$ciuAxKI+uDN9g=?KVm98T&~TunvcTU(wj!kH>frkseBC1fhHd>d(0-mQ#M)4^97 zQPIJEF$v<9{I(!;oh>M{LcFj9P#cztF=kZ>gj*}1rp=28%U*$FYz?1BMOBI&Q=bsG zt!NyL=TuxnK83Bw4WtTv30q0l*GPSTMMy#C=f_C)uOA}-&>%qwavkEs%(6@M-;WyM zYMGScmip)t&)unv%~<8u3IW=$`>dJa-aUM=_e!O)&RQwDvp3ux`ySu*OcL%QYHf{9 zIsxL15SW|)ql+r8n{lPruuwwvQ~H@wQh*d;9A3?a2N$#V4YD0>1m%$-SV zrN@d=sinkvH0!dDt~@KSEJPOR7yfPPz{$Q&jvFlw*L+Om@=E`AkJ7^P^^ zmS%H=Z&n@RoxW%z`i^R6>mr$AlLM3HhA(I4YS&TyCEVz!bL>t8ZR9THPa@8c?M|*{ zN1!e5PiGEN%fcnPvE;q^&!Z29%=e^;dIt}24@aaFcs5iMaDGpZsKrqmj0wouQe4dT znA^$LdhJYPTOo`0SFsP`)H^LWFF7m8RB8E?s_@C%?HhKpE&fL?oHnx(I4w(lwL6p*pwc{>FHUdTUL$8t*bdh zA)n?kl9{A=23Kh*z+(T+uK(HeOA$e4XK$(9>ZiIe&zZ<+$~}AwDRxn8sL}5*B7GGD z1{5U7SeIL%0YE{*af>7`G`U-+Hd%&3+07#6dK)^9Yw<%ZR+@VMe(r%TJ@{?!G-tOt zH&cC4cj)gpMxl;ZbjW3Y^IL04!@1+__+98~j5q9!R^uuFUIuJCARZi6s;pvIux zGt7#NlYEd{nhMIZ!MHxMj;&-eu0=X0To07e9#`0N9ejX(1nS+hsCFjhkj$9Q&y=ca zJT#MZ{lF*njX3JYCZjV2%jo`t`CT|Fz+KCQS+$ql#35@xuOae!{GH4sLkE`#OCbcN zWvVu!sA+0>VjDMKy->Px*k`x5lkXMs9oo8Vyy$`{E%>@M3XukFYCCf~%^i9coWzo@cr7QZ(U z*h8@+(SPa(kYqkjil9!ZOhz@UTTW8>mX@QHt*0l~T~ZM3I;V0)lXNC<-I@Ve88bD; zx%2}2Fl$|GecR0E#bRwI#6u+mL#07zYxhU8us>Q`01#3r36}dk;#7w%q zito~!Et-@sA=jSu5hsqgZEN11F;~z1G6UhU$8jIjoHQc+QYM-~Pxi!y%8KqvsFL#5 zJY1%%G`*NX6dwwxPbU*iOqNp_<^cF1?=wSLRtw&^ODoVd%b-08*# z`o=VhO=l?Xtw$)Ul^z9|CLDY1B0hNH>SKe2bsZJ-^)2|js**lRW;PL4YQH>n0faA{ zr7ivL44lch=>kh7zYn+baVw0iqszghx2n5(-UszPpi(RC9-#iHQ&%UVCAwcJj=D*? zrHzOaq}|b?w@xs5Vi$jT_=>kCFua`>lihZ|-GkBoaC0d^$XhS*&1kmPw%N^uY>iGQ zQ%3PFRdfbPKS>t?Gh51CTA8A?pUkzY)iS?TSOZa^1E~(Dm!YLB`&UT}mNuJ^{#+Hy zxK(i4#0B!!l)A}h_R92C(~uY(deT*kP9#B(!q+O^Y2y%_FBU?VPt`p_ zu~Hp6fTb;GSLF_0xg-;`E(T$QGsjAP3&h=a4<&N5XA4~x&&;7H@stSkN}SxvDf#?0 zTf^DDVzwakhm#E^RrtZ2WkWKntSo&g*N&Mha{9T|$R2)G4n?|iUm*Owh?T|^pmA?s z@0~O0(ZMz(?n@3L4qi%UmN4}B7e=spLm_lP*R`HZY4T-X5i8a-0VG5

hLmfR2Wu z)i30UJYP zz)jIh&It-_KaSp~2{Z^9w7RDGa5ALYimWCgpX4R&@DF)XYAhTHOlyw6m?&7NqDL{p z+rH9NY6AOlTulp&E0ZEO0t6%qbHEx5dE2-tZGCXF5 zb@NIG0mr#Tr0?(tx)e#MI!8HAhdyJ^dC)-z;2O4Q++iG;37y_T8 zS12QD?kix;X6(X2`k`Sf8vQjfx>=EyQYAfqFb+_ofDU_-jl)EXlOCdnD4wt0uF7K= zVO;~aZ*TWH4?i#Zp;gXv6@L; z*TU_%D`6b5h~XYhLFTC;NH{3=QSZ`$Mk4gxmBpO;M}PGXTrWzEXs?EkFZb3~;?(bp zn9I@zrX(xr1|-%mRPNxC+$|W?e1(-X(5o#@3QGm=RVu-f^*o87CM!qlz{HxZ#_o;U zO!KR}Jn5y{=FJL7(D&od(1|}1T&V+nSVT=P)M%Rhfklal^4iN};5@KmqnUA6)Os^| zAInOpqBN_7y3l3=55iVD0vDRdMcxSAbs#=-n=(wh(YBwJeV6|(%dX6W0ew<2WnM^cIi010a zb1-zPLqP}eap9}{8!QX{DfDNI^IJ|~EA{e?ilCar0=3fRS1-i8QCn9bhyOu*6BorM zrThbjudOdt;bDtjYck#H7-#yrFiApAb66K`fkqkin9t*yr-!XSo^F>vh_&EQxdzH? z2TmrxV8jYc)b{CAifM8+gu-RHOMQOg6m&b}62`f4w@ys7AeNr`F*}!ZHL3i8r_Jm( ztFIs1MIKudAGy*wi|NGe7*u^M7L0OZO`1a4vg@c+N~K8An9Q0bm0rMlKQi558E|32 zw$iNC(?C#xmpIF?kleecpTexgnbUg0oaEd?GmyY&1J>xY$_Y@0Rn{aR{;F%oLL>1k zc>VUQv+{4D1cEoaifh#k3BUvn{ik~UPZK?W;_M&{UBwEZJKWgP6BB>y_h)pxE79p3 z*1$QtshS8*jJk)pJ$Z(n}b@Td3SpS1C zYs1kt-`jUfA_bRYG0!k+J$e}atYKMfqe%O5HU4?OqZ!`CtJ!%S+7&4*{?cW>UR2YZ z>NyWY1DF8|OJ91d)q{iHRE8a?k-DQ%xGpi|XY@yF-h|Hx)k_P+QB-}h-F@_F+c-9nPyJd`zyU zcckY{mAY&s6N7R3a{PCL;p8-BV|JMg2)1|@zp%$F#|R{T{8sp{!4B`ZoP@;J7M7>4 z^)dbo^^m9STCVglv*g*2U^PJ;E4{D?y$I}@*8SsW5L9P z%iUL`Slu8#DWyBaw|r^W45{L|vA4sAV(&OY0op>Tvj3DeY-R0Nmy_e?Kdn+L>tVLI zrxKYN5993;J66?_Yc5XYAl^x59zx&kOkuTeXTk#}L@pj6<`s?<;_EQXO5IHFx_@;7 z_#Ag%uQKOi^8S3y4?|4iTDvU{siCImi?yT)5qz52gFQdItzB&Bhh1?#v^rnBa&tMZ+-imG*r0{UfR&T_h+7&pIjxN2qx zXTL<;U-OA1rV;Z&1=NXGw0j5k`Hz0Eni?n^j@>=dWE9WE?XJmb&ho9tzEp@T3HF>F z5@5@Ytgr}L+sKYYJ863FT76{NCxp`!`V)@j4ik|Jd+~}isd;0^@$-+Grw81=$0pAc zC?B)7=l8BSgM7DGgE9EHJ9>RmH5JAfTk{a=G~|no(8$R`_GJ51JtB*&6%<8U=zYB; znS1)NM^P92jX@@3O#md@abPz$X1(aE*}IL@JONHi*hcCTe~c4*)$YNNj}}SZTjI7~ z3T6$A9WLjB?o?hTtuYUjZw5u2kC<=S8z?hJ!KufceC;>Jz4o+ zO~d-@71sKQoM%UOsqI2zA)4+!`E$(mJ3JNJ@@K>wI+*MUm9Vn^BrD2NKgr6Ejz7st zoTc(Lle~eHlbUp2p`Qi_HB3 zY`DK>uJV+V%$O{~7;7UY5TMMQUJKhd=2yEHTs7J{91}1Fn|X$tvyBX~duxufA$@K8 z@>_XOWk*i8x&pMi(l~Jag0e3tFpl`%JfIz_{bBuLJ2|PNBP9M3hntU9kVne6V{5#k zkB($#?4Ef#Hc;T;z9>d=t+tP|~t25h|3^d6z{K%?b zY7nnxIv~5~8}3BPbxjT7n2pi#1^dg;5 zhd12PDIi}76{1#dT2Fgk(Q%GW_K8)6^Y5a&IK~wnu`Oq%^9Kh__0vP$zwL)hR+yV!nhi-GfW$xBl7xNl4TvljljXt`_-MUzHU zv)6>!t*iy+0jdyPFa3O*t#t}{Z_nTeYUwv(;;=IK$iBV}EV)frBp= z-HBUHJ9d4r#L%DcWHqi7A(?4OG@I+2ya!jDX0vonfGT5*cLb?~j~!QG#i=wAvyIZK zgSq5zM`$0@9;xir!H+RvQ+5Z{-~#P>4Nfnlt1(@Xxi2RH_ub$U@}=x|6iiyNvG^et6FGG8-yCGE z`a}0KH>opQCP(dcEVu2ik+=2Pe+fTx@dTO2{D+h}Bm`Ej{d9iy8$e1{uC7UQ*8NGXl$Rgncqa0$-Cn)?-0gh*hDG(859_FCN9EZ8@t2;4M2cssitWlB zo-c0pxI0U$HE>keRQ=9J4%Y(|V;k~v{XlsdPyu_`j_8sSqPlp(FjX>Hf^oB-^=FvT z&J70~WTW<0vUI!fLNJeh&ejP%Rk8~Q1cCNTzXnShFxNGGN4?SN8(+=_i32OER)Wh2 zBWEmi_Gzufez*7Xw`HWnD$?>qg8UEfq&~qlP4()B)vPg2%vdFiu!%NT$9379Pf>K? 
zI;$44b0&1i>)-fypZ~|M`CgVdIM$rwF?^)MAcPu91QV?MRj>$a6S!kh$^ex4-XRbd*=dcoTS zF45wj0^3;dYd|oYE%Dz;N$$v_O_NvyY27~+wif_}Et0k|s)UqWm;x_tQ>aeOoA#Mu z%NKjvXc{puVg-?v1!R9-+4H$Zkg%27V~GR4+gzFrteo|)W??J5&Ebno1Mba_g?vg5 zEg=`N2WyhsyOhvx5ll+6a&mn4?UU&Q6`H0x2xIP_uFV35y^(~y4?QtI2U%kOT>@Cy zamsK?YZg%XR*!dJ{ZvQ|fK9-^Ab-_AVli$8hSzvYZQAsAH}#)61?=f!@1zq)M0fWz zzj*7#B`bw-V64aK7_`zk60^g+*RSIXq#bTZ*i#LvBqnA0`SN$}8ErrHoDAVPaPfKy zOO=e1%i1K}{P4CbUTA6m`(5@uy$75VVrf^yR$O0rsT*Cz++5m_1)feRU4g9_lEIQ} z3?0!X3f3u-i(K(vze}V&emlt{`T^H)oxw_z*Ohf^%_Rr-R5vMVm}P>z`l$;;h19CX z;Rl8RO$Z&{BPzKSeXM1#Y&AEJ)z100th4K)&UnV1I1v)DL4@9)!Utuy#c3t!R8tEB zGEmsZGHFBtp30#3II0mdqeP4Q9b;M0^Eiof*$1H&xF>(qd>a3beLl8bG(t!$lXIf{ zIcY6#={qUAa&>)K59X8@xC(U49F6-%!^M}8YDbl z{$0ft`u8POnVpYEd00YOkuobPWS8JLYh4D^<4y~UUZ>aE*{)ZGg5v%h=W$WNKa}zY z7T(cjJxfsoph%Lm*-JCAySyk#m2?SCql6xN<7N^4s{!t5<@YC~JI^FVPxo9}6oRNt zYl4rg%vA1n9i)GUD~GN9Y7)hZ-q=c02T2e)k3<>Oq@o2DUET? zAqd{sfc7@n*^AZ_-TNBgG&`f}oxX;X9X^kck&LHrANgK2`%{;fj|`wD*#jc-zhxpm z3?o|xK}%DdyMxT+;oGxww|8b|dWOQfgvk1?m?5y5vpf^|y=dh>ufy$^*G4s&MuTF} z^?y`6UX$%XtxD%wnJEI98xB1ka%aR+0b@NFRA8ee5q&wKdUoC&eg*PYA%a@Chl$cz zzWr+oFJ|S2(nJacw3_iI6Ch2h&mcGP`yRN#i=CrAf7m@<Pz z6^(@?m|rIE@{CfLL?l$D4Rx%`VPi3>YuO4?5!9^Mg!UB!OJ%A(r$KDo6?xR+C5&M+ zm%4Sa z>w3R;^_h<%!B}jpX{LC|S-w1hoHzCQa$`MiGUiM$g;|^b0VTraCLt6GgxXjc-focmd>WS{XKC^G}$wnlAq?{Vw}TPb{y`pX>W5Z&pQ$cW`G?Wa0% z<>w2fyR5{2RcIcW3rpTh&rPPfk)!aje`XH)c_GH6@u9D65j?+Z0V%9>*RgB(i&VCw zJu8pm+1(4{zpTP*?{3N&+6VlUZp26X@C0_#{2u|5|7QRv*GJ1FUtYzF=CAvH#-CcZ z#9++ubi7)v-TVKXw4D2YzZ5#J0L-Y;l?&eW!sb_d;9s8r?Q^DI-2`r`>j=~9 zndXVojC?58&fZW~iuB+u?n~pq!J?A)lv%w$!4yH@5CZajTtv2EwfzX~l8mb97t@-gi@mj=zSk*8BLC7lfyQ4Pz)gS9y ze%JS`Qtt=^w=6*H_Y#%m%@*sLBWPg~=hOp#0gTSS;MmRC7mKAg6|G3>SbF{JLXajtHv*+wJ~;FS4s5Fvfxm!2*h*#kSE?TYCRn0m=00>J z6?qy}ZuIMkdHc7xrquO+z%|`QegBMWE~x*r@Wt>K!WW*g|DEvVzi`d2#>)Obcn$gH z-}4$Z*njaF?AA!}G;n-r5LIx(5h;81+Mec5Pu?_0YZpT{XMZ#ZNWLQNW0l9X_X#4< zz8=qc2ByJ9h^OHj#Vr`o@2|X)B1B3g+2Zo?SRe~V)617AG2QiY#Sn8abqbnM;HA2F z%(A}&QWed2nPVEDFoA^OL=Mo)BcKJ7lvD9#Zh!rh0#DXfy4F0JHZK$ZF0YxRevKc5 zsk|J`r7=r|c3e*p<_fMNpk{0M;F@^m5Pk6;2K&e^?nX?Oy=BXeZy*g#7S6J>t$xSA)KgziB%g|3#y-%yeAE za7gF0nZc|Pi0ZBfdVon^-I$sR&@xn|>cOSzYvo3FP#eAGL%Fo#`TiQgX_$f6py5m^_M9$etLIgd2fl zKGGjF_V$N=H_%pR1v-@~{sL|&4>J^!A&6T~} zc|@MUf#!_KwZTK??(MM9gR#@O`mio91$TDTgUwG7F#NY*^6I$={2c@$=y%_lME*^9 z%VfRpw`>4s?Y@0SbkIAh@=^FWj;kI}_ZDZf7r(=1Vz)#np3FjML)`UECuiVOrATcz zQWbEe3#t>IhLzO&6wOs=OT9RRWVDz?d>y>`_d7TCH<0Z*_nI6t`t5<{LwVz2$)sNL zCyx4v)A2g1%RKxv&h`c!d?`&?R%zv4#f+a9nZSXnfg536(mtQg`kWd68O_)(6#+eb z(wfY%%AG9)_tWQFb#~pU_9F#s9Uh(}$*Qw(8e2oZw-27WbQYUP&rCrjQVgo2gD-p= zzI;YfD3ELYpA;_2R>ky%jFu8K3y5@NA*{VKl2KaJ>oCvp``X>Yd4=`wmf5}dDagn} z0`_M*!J%Kn&v1faw}x*x&lCsf%#m*7e@U(oLP+Nhj^z8Cn&LaR=v- z?;3+=^M%_z=sL-|ccG;Pu0F*;(n*qyJZfXsBP##h(PZ11_D&g7HEj)N7YM#|Q01f> z4;zytjb_Z3c7~+athKr^B>1|pmcI;Y0IlH1m@8}z<>9RvR~&FQRkMf8Yqy7_0t^Rx zttuIP4bnJ)=!r}Gr&X=wX?l~#WgPH}UW$uXg(!Z}m!P3voW znkneW5SgknPTMMqKNBTz-3(v_3(1X|?l`gTeK9GwdwF2x>Q%<%Ay~*7)ujqhXdbTBWo63NSHCtbm+7x%!YI;5^1~@{SIHNbO~5Fvi^=&&?v^#z>#l z@wzgUj}aDo`2C@u`{_bZY(4swlP#s9Y9%eNr@Z%g!a~v^`Jya0M8oP^r0{Zor8-+K z1E<$oCbCbQKz~I8bH7S+%v0Fl-yCTS9}R^%!ePcoS9J)*k_JF)*BkO07czl>IZFPC zC(Faj2W0EbYJ`-!#JW%BYf;r)|PPO*$ImyRold|xBzVgj2#RlFUA%DYqw3snH=9W6lu<2C_IKeQe{{m;o z0&SAHIQgEkGLIPfAb9|F^v4he^ZQ{T3|B0!N8h}4|J7$krn)U@H%y6fL22n)M28j( zPQ}tuTJLt2>g_Q5V^S(lMj@?rjdgj*#M*qn(&e4DZiY#B@AIaXv*7pE{1u~XhgIo+ zZ1JkP`pE@%Bv5Bc#H)%0HCtZ&p#rP&=e^;TH^Yr*my!-$N;lFor-Ra4t?zy|7Z@ET8 zzAY3sSqpV(7E$?#IdNas>4Pt6UKm*9Bjfh}G^?HDGkGx~m(pW(s%@O_B%7C=c?UC} zVo=*!L+{JF0B;+TT1^a1?Z2OO{IByLr9+(P}<(Xg#j&89mg4^=&AZ1jsJ)1C8A1`qp>;P 
zX6)$A_20bM5N&@tBond0d)fxMzPJCuzl6T8S)PsC5TUk45X}CVk%lm0-ZThe zDOb@7Nw9lb15Gu;`0-S|k4vFiWfUlW64i5M;C`xObF6fnr|Gl={Bp1P^uT*-k`Iu2 z9$iNRhhAXsHc1s&!F^8fHGFyFl44Tl#7@AM)3;GmFon`?f}w}?-jZ0E-gkK6bHpy} zYQvAV{n4k(=4LjE0``?Y%mt4e@{WoohOLlT!r(q@##zcBJa=EGeeOM0n!~8O5-bmG z+9l~wx@#&sELeE)@0CAr8gu8FenKe!WCzr=MS%I0=F~zRhvrqgYNR^OLI~s!=_#h-k zEMxxS>cSmYk0mM^%A5^(`J@7A0mq&8~Mij1Xza)%Jd_ph>et zd1&6-=FxiK?KAhK=!2HFenWg?p37kL*(Y`bHt&~DwcBu17apD#gdGU@cZtW|P+}%Z zI+{1;*Siaz^679Zzo{zlC28RVSK)H-#@}?|FiHBYM7S9{UFJb09xYFk0WO{tI~wBg zpzo;McQWf>*tKQ1o_v#|m7-X+V@~SAzY=!yAqrwGG&q#cp2uyZ{q}!#_nuKrZSS|R z=cq@DM}kOE)CejF3Mfi1Q3R111*Joz2?zl(6e&rL6b&Fv5Tr&!lP=PvB^G*%^cIM; zgdPz>Nl0=x=a%sf-=F^Phdb_g$D6^YjEu1N%35UIUj$m3og z&4#ryduF{Y3k%@Nou6^#-{cpOgB!+g1%AHle`8-lKTUA^hCDuba!WG=1O&$1`XlIg z{~x=tCigB5tGzJ>5x?vu>w_;T>CAjy92pSO$Mw3mNA-J^x6UQ_4e z3kF}?4QzS)kH(>ef0dz&B1;*!B!B)X|IK;%E3`pi;Xm9eFs$7yDxsTIC&o$mfUzI0 zzGw@rE%8CpdENO7vrah6985!P82@wo$xk=%q(Is3!+^}Odm7eS}>@!_jRJ zkDHMD$JpDK(RwOJ9jC~A@-74_#gzIjqUdx=sgpsLS(We$E1|Tpe%r;U{yG81Sz;+mi0Mr!(>Sf<|9N;~gpVs)(?Q&&p*7i{SPXu% z?f|yoEiGiO%B{9NRZq}p2yRU(wiDKQB)B|kf?6w` z`VsL?#<#?YdA7)w?)-VUuWaG?+&Aao^AM(Hu?sdI6Cu8+gD07%L^hJOFtVc9swW_g zI1n{xb!~5FKC)XLK+D6#Jm3p zP(&2?7R-_;2a>b(zA~A#40pRt-|Rexz1Srucg13$wsGw%?@}>Efg_3cO1Dq4j?POl!GcR@XrE?F1@Obw zCIfFYzmuF!(~!iL>VY-e5fUL!1=`ezJ9?ygkcKaDfoIu+M>(we$~Zl)3-;&<=A*O2to;!Aau&c20yi^Kz3ZHx0xp+RvY&1t zu+_5|h>_}c9>Q88*ex(TTGGxi1MN)LR~o|VUQ)p5Uc;DeWG}TNYw{ctlDH*d#Fm7D zQ9|^PtWjSarat-=#hz2VZW|6>9)ont6ge{L#Sf=uzA*KO9}`;OP-adU3t4sM8W02rwtj97#9x zocirY_LbjKp6Byjsjhgx_CYPVGP1?J^zgE*;Q550{aWjdMlf08#$LIYXxnP$VbjGr ziLc1^2$xT?!)^Cdl8+X`sKFXa(CMLs1(P2CB4@Z$80UNF7Zmu?ZKr=x=qg6j_X|VH z6~7#JNJnRT8%q)sv%3B1;dFGg&Vrsv>=nE`r84?CNMo&$a(Bvw5+Zjw4piJ4?9uAq zUB$ZEhk`>sUy&Xx?GK{F4~Pv_+XdcMj1$HQvNCTjdb1NB5m)UZ=N}x(2FpZLBcsi8 zmc!?qYyyoK6;fo!;%{CJ=#4otnzde2uv|d957(v21!yw{+t(exn7A8`!W>;8LnoVk z(>oo(kbQ!`hk&9yP_e_SjE8-F@ew75_nj439;IcZ5f(^cHff2R?$&Yvi91JFZjAXw z6KccbS`{v$7|~N&8;x-=yU7nShCfXdbX&2jQA7^abR~Be@0{G#GB9K6l?Sk z#6rDrjX+uPY1WorI~n8a=D9$3c|6BP)LWEojS!O%E=na->L0C8~5ZU)ZLl~XDaM+jJLQ9K)IFgB1Y^J=T zzw~!ESG_08i8&idYa3vxPMShrq-LBuN{h>~A%`xUTxOPHHO(t+x5w+H%d*)?T8P34 z?6kQ@=7o*x zUZ2EP*oo*^idufJ_|-sEvYY?OKXNW?^M`r!j>!?6siw1Wx_eM4SIXp0*DhJ#`sEdo@Rz{Zl8}r z@v*SqPxpp)B1RGw{zk)XDEB8b*SB99z;2{A+A8}7hXLrxuo>}d@<*f+@LZy>A6e#- z2+Pso7n;oeVl*ELu39kAKSX4-8>cuG1r*i@68ZmiEB#qM zolI?xY|AlwjT@49dPcS_BlDHlbraj^V_zgEOF^|^b49J0{JvD;CJ2Y!M(1re4SeSN zvr1MX8#~#^u4-&82!8g|W_9vFTvjlL-bY1^$}&BTIkh_YSfLWiPS9EQ+sm?eyF}V> zF`tmGz9i3b=;&`NnWb}D>j0T8)_hcwFaBmC*}YuiTiBJq=6P8sw^ZNf6@1~P{*a>^ z=yW~52(0w$4`P6hBA5y+(;?X3y!+)7{Eqgr6)agk$C|RH|3LNng%5_RjRj>At~t%Sfto!%yU-a z;AuECJr;)q;Xk|0URiE)huP_TUPVk)>^!CnWv+!(=OGL;8OnHo)(sxS{MJI>d@{Pd(V~=I=aGLdeU>oD zZmXo|qGjZ?-8C+6?A1;6Qe%!Mm9MMK8Lc}jBf(*FHMTGbmiyjlsLX4}i5di>w* zshxJ;HtWQBFWiqUh1JsjI=`{faJPUeDHiM?;tbMlG@styuch%{Fqk7^R^d?(rM(vO z5nZ~n$oW_JPD897rx7o+Inj>bi0+Sz2IM8Zx=Ed|Ylcg@m~MR0tu>Zs!{zw7s0>B3 z+wIy3GvP4#k{M^mjB_aDW%nbzJ)_U!Jg+ieWru=#O-a#)cIJYi-8ygXmR9Bjr>aic zewv@iu|RJ)hg+W4?~0GK5R1NH%7%x~@%_S$%W?iHf<(;`y)DG5Q*n&@SR+L6jJd}S zB1v2P_br@#dz>?uP{fjN%uivH`Pjb!|7n*VN)@100M}Y9>wKRR{UIseLd4qhc$xo4$ zKe?yNk4vDkA9`Nj|0%|x@nvXicky^+p6T5gk9n-`SVKyAx6$~;+h-C9F7$%0H6)-g zLGHBGP)afwFR7bkwQ7bgg{`ygp=A-svBo|%b6*GHnwn+xz_1i_HrP6Lx`X&b%O~~0 z8%*pas^itqEd6By2H7W$7@EK6+LTbrw&1Jo$L?Q7$}%sv=192yRm+ zw>IiL0Tt)Uog-B|?g5Ta^4b1)U%T|G`)065;ngC#{s@JxOno2*eW<9T^I)p{if%=7 zqZ(CcO)?{=p+?dy_cRB27FBO?%N#ox6gY~B#FMg~pW!uInB&Wv_xqGDRyXQuSMCM!5+XJ!P zjasvem+VebERl9|BwQDd4iF187_|&u3BMCv7#*~ljgS-NG)0X?fOEdMD=k;lbN8e& z0D%?D$LhIh>NIBs9j$}Yk54j7OEoup{AT^jn`UzJqb9Vu=) 
zL-lRp*%@ax#fT_A_^qtOp%TXbmtb|fLWrpTzT!;k<`&T_(PZxz<9}WjhbL(LN^2U* z^yS?r^7KUPeBk1KcYbDkk%vtBk^fvY0m55refFU*@1`%=y4-yyR0jUUu;i6|QUHjP ziT^JZ0joUN=f=qE)d_fT>7ZKbd|uYjfpC0R$bWBR8+8scy$v1h9AQIO3GeLQdBJrft}ELyMu2YI z{-5R5|E>_93$cgb=B?g_&_hmC(*>SA-1*Nsv9otk9_@!X*Hc!jnVg`p)ZHhN+RrltEM?SdvlSW{c@5| z9e8TLefMx%GB7Ewh{*oxE2-hJ_wvA7OeQp4^!x{Ma)z(xTGSo!s)zV*Z1dqQtl~dI z-A1j{GIEC6S0ccvtQKTBSz|LNXeAp_rLTzT;i?1FJhY$Z7$tBvZ{>Z1$JBd76xs(S zS6+5EfRcpW+-VS3(}x5B$|@2j%Djsw`iQTq&EgR5ZMbAwis-m=XqQFF>~h`sCcaQ1 z-I_DD>6|r7T0(GWq11+_ZD}1uq1QUOyw2?NPabUl5*S!zFnh5E_njy2{fc|EOYI@JWA9yee5$iDQtf5#VVyI<~#kaWdG)bNMM^dW21?pc1yI z!bNBut#uq}tPgdQQw392(s<3Kd4eGgugu9Tc6qdHll#2C`g`N1d50@{h}L+A|JsG| zc1Wp~FA@a5+< z2Lj%^M1{?*cJ$J!Hw?#^4=6Z-l6POSoHo^bF)4t&bqWz|zF>10doxX~bm`fivJ_^I zt}44hNuCA)pEPKfq;_AKnJDOI^#)5+a5mTwq~hh4KbD^j`hQ}oq^xybo=<;Czf7IJ zw~N*+W-@`WHuoI}fg>k7)<{Jx%4Dd~^>o|y>Py(3n@yY6fwqiVSA3s*(BbKFdGo?L zMRR%lV8)k4NByaLeGBjogy#&79RlT%dxhxu^5sNF7O4CME3_C9%1>$$lLNUeAx)F?uaRbmmZ>YoixkS}rDwi^{dcUhAp$Y*2a7dbVw z@)tZkR=G1wO6#KeLPm`!)tqX-wrbBw513=Boi9_7AbPrwoy~Xq{S0sppKF865+v>w z+f>N|d{DooW;yxCb|{j)L?Z3ykU9m>`K;TS=oOOj)|xL9!Hj@pBZCaXi=(UU&gnnS z5WoovPTAcIlD@(nc-Ty*a7QG{V?nOan#pXBder`@B;(^t@6yry8-ladWIeJfJ1a6y z`hHHHU${oTqmIs10b0M3b!<+Iuo z8}27B)O)F~e*V659&LoRCN_pdVjKyrmHzDV7`W+_+Xk*No;x~7J|?%K(tfR+v-8eE z4YNxGEI_!I7pY@H`^M1Q@~LJ{#K9iQ+T7ZVPif@KYnVX;c|N0~Yd3BL46PQwjmvj# zG&4G*Iq;q?M)$yD+&wt?Fwwh+c%Gifb@=ru#Io;mqNR17GeV%b+L8YDV||U6Z}1C+ z5(v+gsnH&Af|VS6`DR*_QF7x9g&OksGB2byaU?|-kFq6L%Qc)k&c!NX;8hkFe@pIE zE}u9`&Kwv^0$RP!e<}PPWUpZ~AG=Jw7`9Tb$ya<0YT(yCBlyVZ9FVr~!Zaz(ilD9} z$vJ7tzaF_3aBtHnJ$Si;mh!cbzHhb60JGx6$Z{wgEXLVx+_^93-YQ~LZ^!3oOUKzP z#zRH|7m~Pdfiu(`C5!1!-rwVSD)Qsb`gRTH_upv@vUmJ?gnzKQ>0xU5XA?22(M(S# zhKIu}1ul>>J6gZ(!bu?jR5B0DB&G`83m#!=te4$=q*V^%#gQfu+tpikP;sb1L!Z&2 zW3C}U#O4>LVuMzXc!M5aKRI5~f85|w=6d6r8+~?vKGUD+e~rv21F&{K&{^TT?Q_um z?Foi#*(FItg@}Xu$fLiW(O>Z6%+Xx~LH9!Pp8NMSZaEpXQR7ywIhs+u4W}O2PJcL! 
zD8Ro>jRLUlfhD=(gh5c5Z%ec3B_(^K74?e1oS6MJ)nb>FC8tICq)a9leWRm8`E~50 zwqvji`5S};!)wNl7K0-UNwf?17BKR^~{)QzW5zlqr-cx#$S2U)vL|YK(Q7sM1 zuyA?Nd~Gz7*mx@*RE$ay;wa2cBy~E!(AUCNqx`vh9Nb1lvXI}P^vt6npmEeuszm*n zOF@8CR>%t~WTXG@J8FV6Kx2>v<7Mte;kjNo7R>RWrP<;au~ z)J&4)mDm|gavz~L{uOX%0-4O|X;a2x-%S>8ZuBoqNjIS11A{oJK=ePaZ zQWVX(u-tmw3)#xmbV9&Zm^Wqe}_d19KnF7?!RPKiQ>mC>iwo%u}Dpw&Or#cI5M8CJ9&Qz6QM$F7b?p-x;YU+ zpo$RC@(y-WQmgZQB47TII~FFZ#g4>RSM#UVn@cJJ#(1x<)#2-TlCE`ff4-FrVx7$- z8HW|$@`pHhozA7)G>y2)QRH~HMYXJlyGqYVh~F*kKb{;T;p9ut`Dh}>@V?Ripi$Gs z6cKiX``VrkEanj8FV})6Gug{85WGYuRL+hKVDorsM2-tcQWW}PlTPfc%kcOc`E~u2 zUJ6VGrP-3mWx5%hA}eOK9|AXM?-~~bt5>QwM%nrNed%T9AowfLr4qg%hy$K7zNWRF z-sFAL_s!yIjk`g2!~&%fQfHC|3g?}&I~k|3H?khd0yk?|G@zO^9jlzGm4~g-L-<0< z@QOHa6`dR;v&n_cmPG`aox zD%~z7h%j2hAAdhJ_}$P0(u%Zv2BR;`t_$j+)&E%(cEeOGEE!d)R$`4|PnFX2w%2Fv zC->hsAue66lOVclj=%*HLhdDJ3tfzNF$E6biy?c6?ThIk8Rt<>Yk6rfI&nrP)L`84 z!uxgSa1#Rlp+}4MM8%=`H7k+OegbRI?gJ&*lXP9~{BiA8={bW5o^aQ){4qtp21!#S z@br9%MoRcx88#=q;U73OcsTHc&gYAAFvmW=BF$ASWoXdn&}5Sjuj0gdKXQz@+R3>Q z|8Tklf#{`v|4i5@*;Y(<9oUX9E7@5f;hH8t1yNzm)r(`E9n`qefLlPn*5(~5Z0=P3 zV#lJ8lSZ)>T|=7iq;(A~lpB0u^xv?%A2p||Nv#|EH5h4uF}v+S$&C1Vf?9=D z1Ip)js>qh=93;GYmJCJs7BMRJy&)WsFY#3~%HNEYcDidR2Y`nDC<#P3?qef`c$dY@X9N9!?qvZm~z&4 za8;WgQphksvJ-MfK-P(6^L+fNi;wA%2F97P>RnuDX%9joBj)ijt73q5d}8q9=(E0s zrS*jC;Iz`X;uqD9;=cBpc=^1Nw)B-2 zKff!v@<2wYh`BR~S$nnO?`^yVODOM6Xq$!0N(-HE8ZKiI`23W%Ig)%FaJcXILsj`6 zq1U|cpNzTpTWlA3I%}?X238^j7;4LFNM)0vzh-cp@i`&L12=>2XxKwfTx0X{9qL7o zonU|Tp^<;`YRj^AnQs51hEv+uo4;kz|2oP}RaRhSqOWq#8_eLV0V2svq#4DGulO9_ z)ouRwz9)FuvU-n+f!ZQzU4eQZMT7Uj;!U^Q;R_FTFG+5kcn@$MNd&HbDH8UWGZT9D z9r>S~jsIOU;QHUx40gR`_&&(~Qs+Z92hU%M4v1!5Gk*R4ztAQAPusr!@2NXqp~qoC z6cU=fNFs>-fJ8)zJRRuq0nWAz(GTsJT`vc`emp;$=I%7DelGU>#Pj7{U|LDgo;jld zz@Vl79}HT7^73-G>_s*lMI^KS2>g|!sEnP0vP=JbajE5^EoaXy!(T!COjz?f@Q@E{ z@6}7c)=#7kenDN>`;Wsf3!W$w$xjlrEqo=w!_6LYls%TrR}&N&O*P?B-Nh%iG(9Hb zm~WXGbJ?BX4p z?<4I-HbR^oeh6`yu~Ge)n_q70pJ?CZLFQ*$|V7o?W~N<&nca6jSwz_7qdaEbiqlg1kP_1{FSQc12T_#oLqPoi6A8*PfMppU z{Z^&~3TZ$Y^(J%IlDb2$ENseGX_i%~u62h4o;tul#E$2;>R|Nxcv?5rob;GjM_L5P zuo6*5`s6&C-U)De8o|}E4e0v&O}^xik}erz=|lIy?Q7U#llhj+ex*Fgh}rD{K0l39 z>~xi60bEXT(zZRdY+JXaw7$Z-Mwu_Kvg_hTdXCmQf|d{(gZ8{mne2ZdcUb~ujGM~p zA9&|(o(?ezx|e_I*lfOvmYP)W=H=)Wz`ldtCaT2-8=iIK9~<;et<`!JCS&!f8%5mhM-QXhKe+g4AgijbbWhw&e|kqM`h~LW zT=lU%M}D=|yzLTs6gPXxier2H9@)T$!V&VKUXu!4XNq?MHeI4kNAixOW?-b{%e((HONI9A`Q3(M~6B5K-d z$5*nKPIAQh%ACUHQY7s@dKkd&dmJ?NUrt>wryGtk=Q;|ns+P^@EWicwH=BfI@*NjE zZeWr$OGdBcUn~&E4T>7y2Ddi&uCiiyK7HO84?(3mhePS+3w%`QOaYV-4$_c9jlJ*Z zgXgi`9?BE+t>gRi(!F`k7vCREBq5H+GdII2@-Rl0tW_zs)9K^???X{J>u46Aj z=6uk@Z>5sp9NOX#yk0{4{2Q2>1dn@-jGGrd{N=XCl=n3VhaRP#w|35X5UdBmrV`X< zQPLS>&H;W(G4P%f6LBUXk;1oBhy6xVYmt^Lg0V+7u>IP;az;WuEB$XoD1k@4A)u-Uy1>a?`EwWVY z@utoXUZaSV{^QnWoKNoaK{sxx%Q#xs3qubbxV`zUhP#ZKp7sF*4^}y?O^78L?Hpwm zhb&ZsaP$;hqJT!YE9=c0`{}@Ma+L4?%k5-IFFfn( zGexJkGJ9g>1aUbfN7qcH4-wwI8Zccv0psAB*}an}9N}iX183c@#$7qfB z?#LO3o0nhzysARA8okv*urZ$l^lP-3=Y6{o*{;uk0@V4z$HV%e!3?YjxV% zxT)o(t_gYU@t-P2U*1`WYr7}|$9M8Q#{Q<}TC)?E2ZN}6mp;s;J$$6`dddr6b$ZRP zLWu{FGBzSHpyGK8EwpjCN|n3LG?&);8*kV4$QXn0$vc>CF7_z#`wRT_ns}mhZPa%U z5RTj=Fn8_aM2+yG{*-A;k(R$Q>-g7g7U5EkzE%t8UfF0Q#Fpx z(km=y+rtEgsAngxHyVd$J?#Kz?|G? 
z!*Oic35DC$Ffqod+!YIE*NT!uU8;kl?Wn~S*!o#*h>Q@9N^}XEd;X8whV_cov?y=S zwR^Vfd87n)Bv;Qow|ynzw&7s?Xg;VoFE<%S{_8M~ti6f`vT|gNj`p@buSo`I*1FtZ z6sK;bGZckn-&38w$=G892%y%)440#s_${06TSV%r(#pq*rWNP&!C6$@~>xqvRt!F)pFb;#^@B_w04IM zqo87{r_TYCb4R2f__#cKgm;64Rg8&q$i9gg)O__03FIQ4h!RlSk2)h(`x={9(!Qhd zs3hP^vpBKSsx~Fmg>5wfMRvirR$mK*&$+o}N%_jyRRubs(!e{2w7ZhbAd%lLB7MW~ zAtg5ZQ&5o9Xh}r2^j}Ig;<&&5xDX&V!AspRx?+7d-f;08KaTTKBc}-GKxuTb``7yb z`|AU}dtam6#{16&_?fYj{XQ1}pFrREWSIL4PM{Q}e#RC9GZxB1OG^h+&Muj?Hamt_ono_r-|pZe*9=m{z0zV_T3|< zY7`~T=SiSXeOD63Rg$PA33*u`7Vgqr z^$Efx%R*RQU__x@dB6N;{PWVBym7DvHiw-AoX_i$WMtipW#2ve`>|QBNAN;tEd?jT zs(4)GGSYekKAPQM!XXMJ?iIYwbr^Y|{G@0Ko&u8D6gFJyrlM93<0^}>>Fz7~qSCTY ztJtk1IUr^-HRi0Ow)<$9(PC8O__TwJo9Wt)1aAKHb`ekhgi=qH2`0&1V5-omN^`q! z&Q!^;e`jj{KxI(UF>%QW7a6V|YZJfK8ewUL}|;bO_ZE$m@IgEGduOyX4u`d0Q*y9DofKcr96<;v56p_49=nn~m7 z5qS!ntoncaB~^P#{2Mg|oGIIV9eJ{Q`sshZTG}}WCX$Y^D2o>eyYK1g7;Bek-3$3| D=b8Cu@`t?7d-mS%UVW|K+B;N9K??5y`2#dGG&~t;31u`i^iwo6NDuaX z@W!aTz770`?x-yF7OiBEauW@W7EMM%Ow~ zI5bM{iQ#yZvxPly33#lubNms_lXFUsaKF^=fwI~+!uLEl&v9@zXAT$b9C$ch_MXgF z!uJFfGZ^8Hrp@QxW~V~04hJIcGec?7p)YXIb*EmTQ(_q~`dPOZZGOvp&K(-c9f;W$ z;73BDJ}M2R^$%e-2)S?*AOE2KlQS}U;!UUKaIbk?$fXmJm;4xI4$X+%G#_;T3tHzq zsYhcXrgSR{B&a|UV_5!yhmB7!%ic}dE#>wCxti^|OSwRakidEA3Kq)=_h&r`pNwld z13JkCmZL_#HXJ1c1LC(eHBNG@sCdrJS@-3j#ke}v4DZda)xvP8>L>4Fd2^*K;%Bgx z5CdyOv*HjUxmK-lE9>clwSc%)`7jS}k~bB%jvxO;u_?%+OhX!RNBZoAJpU1SEtXki z5NFkS4n}Cipy9Yli%uQ0qj6o8tc~q?R+jB=ubSk0X!bA8IDg7?Fh@36kmlpcgiMV_ z@G>j?ZDRiOkRbxCQ>cyd&svOM3id{Wa5pt33(@`!(eZt9*FR+)rFm(K&z}FNHwig7 z(b*huR9KL6h1lTmpoe*ug;okfL+l|9C9?KuN23V^qzPMlioUs7Q#li>_y>5{{tTDQ zf9Er2-bB;W5H_-N_XFzM`wh91F#F$&PC09(a`er7Ezow=@I77O`M4NbZDTiWuvr!Jy>LXzZm?b=?v6Aln&Jr#jcmQ#yT)m zP~kzd>JGbrvh0Td!T6ju2GB)Cnh8E*4@of*Pb-pDcA47lRrwpZfua6b5c24BsoYea zk?;gFEvd>Io?|^?Q~Ib!cz-#;=~Ihfy>51VPv4vKddd<4PTtYkJRo)VU?7w_$yU0Z z3I*?earm41SNT4ZL^8GsyF+qZ0Wt7=@{)JZ?<}{!=cx>=gBXO|w8DoZAVgw*NNOV6 zPYlwu`{dDt-qP(123?=_88z-p!s2VfT73nJ#j9g29(nF@==R|n zz6CRxqjFLh3ExAyW{;oC-V~kvZ7@6KZpISKlF;p#$7S6(Tk=mXn(uaP>Ua`bEf-N>IMek& z=lpL*p5s=J-i=ex-`IAuPre?^P;5`Y*)mr;3Q>qZ3+>_V;{=a|54`IQ2pf)_4ysn< z%%-~oTBK=o;X^_B$-6PF6#V-US{OK=U!yCo&Hc!34uKz4iCos>_VKDeHQlr^j(yE@ zkIcrn(P%?`h^n=Zw_2P3yHq3YfaAhE+h}8tHRe;4=^^$m9eO;GeSfXUsrx}Rr?;9{ z1lwWbpp6l(DqOj9+I$7YSn3j_5 z7GqS7SCYx#I{3YwY$%k(GQoDb-|fQiB*SDz$YuW68?S^BjuX{Exaa5O6#4glNL%IZ zVRq;X4D^+lYh3QH*nKCfyprZRO{+qG=Z~rAPVS}sCWf`Zoca_w z__M!Qn=%MY^d6MSF%s4QW*Kh3Ay)v52b*X?r3kspPO8=E++VjDn#XPb=fJ%)HdT|r z_;3miU$rphdW9YaPiYdLlcxQQN%`9;BV+FFdn%F3g_Z5F80c^lq<9D#wS-7ar~@60z_EFCy34 zXZy7A`N%=>PlA4ZMS0A?3srlXQ{sG%@+;YP+&LUdL9wei)^MeDQVB{hJMmIlov?ug z^m<_Q=ZjtdCt+4b`?w`?|^yQP@8$mwBkWBs7Y1`U`b7@j;E8N zH8}~N=Cv`j<~V}8aCaX^NAQ%R&-qNw_)ycYbm6tJ=K)>dZ3;K1#lg%3Ur#|>bE|NV zY=7?>yqd1)M@;9+(#zo-d=k~263kN1bd)hzAC4y|(M6^BB6X1jSl^#JZv7#Z^ZjPD}0z$5?QO?ITy^&N{ z+r@Q& z7DB_yXHaY`$O0wFN>m-(kuRdwi(FuP{(p{I@CcYnxeLkc8Cg_}22eAvQzJXqsc zdYujAN>GiZDT0$6;TIy{XPaRwux)B9lOsPVJgbG+84@?#`HZ7!2CkY>WH zy34oGQ*ol7q4pB}oadFu`0b2&$0(KoYiq4rwV1A3+ig`RW*Clx#mACu+nt%1Nwo1 zo7@>MS_ggMQm9S)+j(86%+bO{@7UgTZge^dhSJ-M_R2V!jiwm4Ud<(*la4@NpSfN} zOC(YTt~&;4w#sN=3&HnK`T<*?uO*&y)qpn%4#EPoJyMgicpc8W-uDnypz;QHjau67tZ{Uc#{#mWD%@$jQr)}7_07qc62U>p;@ z>0O*=Y8~7;O8e~HkY}M~TAOqd>n8w+MkyIlgPhNe!obYb=@o}n)5dpkD3fLkx|6BP zBb&oyFyHu7qaS8YydNjJiz#Cl+M8D%UwF>L!4LY3TKG5;=;->8o&L|kV8*}uHaM6- zV6+7la?lV1NHga2kE$|*g@IlB!K3uB;bXJfuh?E!b{4FthNt&18af%eAUzjHrAbNI zDsWB2>%7MwYs(Dm%JEKH)Iz*}yjhd84IX_!C3$I6cT+D!J!XoQ-reISmJAd$IHld3 z11H(vuNK`UGW0TtN-fz*L00E{oSJE4CCV;}Y>#BQQ)YL+2TM}3CHH~UkxXmdiuO`e z{XI=SHoRLO1D2Y-!gtP*W)e?C*U327@D+hP_Kzrvnf+skUkrNFZ1&>cEnM4@uzbwA 
zTW+j@M^en%&D)ZYo%6msN6TV5PG5Iw@=OyirP6g}#q-V5=ea{c(^j-M(Xj^sFPMO3 zsC185-^CCq%qBQXPqn4(Z>!p>?wPT}g!A|zo6rIGi8D7Ct3+M&Myg*dM(T1|AW))7 z1uq{v@~G6fhq_qoMHk>;sC+LE=kt-W4@JWS?59N%Lq?DqK`vX*@DN3Y`}_Z8pHxB7(q5e zd4BKp^rzyky}4)GkgfYJ(Nb#FB{#mFBR)sc@qP`Dxb5HtGkFlO6VKVEbff7@pm8xZ;A z)zk!_Uqu<&x{QQDn3U))uKmx?w(e5$pb8w^tcOhAlT?;KU`n2T$MK?=MEl&W97^(L zXu4TF6$pDezSZ7#oEmtGV+%n9aZO@D3kWctKw< zfN(|D^fpzR7IK&TLtDAUmlArgSVOah#<*<^W44T6Ga9@Bes8S)^Zi&foKGh7Y*ri< z(gC0p5&*CF>JFZPG762T=Z^$0mF5d?js~oP2PCo(B1I7B4meCPWM~`jlBPcWs>w7U z@3o_c#q|CKK(V4fPE{`ZfXx1sL5@Tc!nh>hUnCbX?r$-+ZuE8UM>!@mL;>(-<48bT z9+C$p@D^K{Yd*O%n%GM7XE+iVfK+a`6dPP_@Q{U5=>;4>H29WKTW3GxFfDqRfE_M) zrE({nKUCbza=^{CkHffAG3kL=gp`H)T<*n#hvX_PQk)n+S%_9jPnDgKBjTt&d5P=Q zE0DR{Yi69Hf#1&WBIq8&M{^&@PJQa4R836C0mQs;rW|1Qm%gK3b*Mu3H=v$?*=F*N`Z&L|RgM(547jXn zzbGW~t}a=-D`Fo53H?Nf8Ml1}pOK@Is+6Hs2D@RHsI=C20UiQnQk{Wd>cCR2p&?|_ ztxRBotOQttkx&kN(qPn7Hx}fA)IT#!;L^1JWy0Kl_eF;eO->BuCog+y#jzH5|9>Y! z6oO6Zry;D$B-;+31!SxaN>p)=kZ3+FSd|Ao-;1=99P#DC<_aLeEu6%=eP#=z_c*a<#l^wm_h$KL$v zYcp69F>+_m*%>txPYF2qzY+y$+E&bng-bzxoIyI2@(=vi5{v2L|Il5~!lU`XN+FJ{ z6cUhSDOxt-c_#z^qXjA?N0a-?##GV17148M|KU3t6Gg08fF5A;dF;J;A@5QUzZjM6 z@s=KhLQuKSztZcM2a=|RL@mH`-S7+ub^SCer+a{_iuR(Z{K81cgZW7-&3i_h+3yQ5 zEY;~J3gR-Yp8$859kB=K{LPsSz-PA+?)Gxri@4TsMevL8^25FGS6p02`?l!C(Grl& zVzwCV7{X9_|jr%Kd&=Y*n4->%v>>--iF~tvz##G3{0TMI3tEx$C zb7IpR*M-0qL?I}k!|po**zR*}-csf_w(p4AUebpW`J*~;e#FTBEuTb~t%1y3r$3c@l zF>8jrC=C4DANAF5)zJ3|eV)pjJ39E3fdPX-Dza^2N+-NzzJbahOv{^sU2jC0pi}6h zAQk%V_d2YhPA=RE0atip!&B4P5L!8bz4ow%F2rVdYB5z)qh=!okKpSUG1t%IJ59K~GB zMKt4)RGJqCBdO&V@DFR05rONX8W7sV2)>J_Mf$Z0WXIcfY{NgSrruzPTrb}4BVH-4 zoK{=WIWA&g;k9+Jm3EMsWjKp6@Yo_kq)Z1o;xc{OGK4&7+pmN?PC8l!g$Sl}T&awV zj8tz>=>pkRRY!hS+BMdpk=hjm$&RI*D2LxzdriSsB@N2Do;UYMk(=uoD!!+v1^HtS z&PH=weAFUMw9p%hgvXDQ?ax>1SgacU#Ajwkn`Qbu(Dyw~ApaSow@)O6+g&}SAKi{G z(m0r5RoeVuujR7j;(|m2^W+b2_oULeEg4;QXTSXY z`{}CO5Cge#E6j0{lAqDmq%}5PZInMfrv!xu;cgJPyICwY3ni9*jQso=BRc)@Vh@I& zTuPeTKN;s8?_-;N)Z59w4?)P`JaV&9x{F7_a|vi6Ps8e@2|i%*drCz?VKUJoZzu?a z_C*7`6q>GGQ5efKX2K4@N-{7Ijz#dE7`@mvqLo3lj{P;b7PE1TZ zo!0gvSnyR0&HP@Tq!S=TQyN*M4>yg3T5oUSps?2A1LaV0zto9>qc5kO-UMC9GHZlhaN}r^s~()?SN# z0-eRec4@9jpE}>xMlQup{dPkX5^2P)bwzAjhkEmfdM7>+l#T*^#->IKJ z2JE$b zb2!htXt0wUKE5dAanx~-R5yf-K)B#WWP7LbU^K18^N=MfPMdm*?kzPlV{f3p;ET}R ze}uFI7?D3%sC{t>7F;tuE(bkPzwzR2oi5k*Cu?mrS|jpkK{WV6VUCHWRT__D;AKp& z2(U44q^(4C$TpkfdbV>a&!0MhJcZ6a&ZdGJ5<`^tu-KLzSrGMfX(^P0Y&6kuPi9It zxOUM)1?Q$}^Yj_|*Q^&rZe(N|Be~t^vQvl@N4qek{K51C-|(b@RajhFr}s#-$Tbx= z_EIi^aI2bzoa*@?nGpG{_jy7?&Z6fjm5%H2#MW))5I+`YGyTwy>fRA%GlAJ^tLR(p z#wRwjHN~Fpf6KdgVGln>fC-{lx3`` zbtAksO*G71JZQTqDLdrb0b@8=2&O^CM+>(-zOBFVbLsuIV;m(!VwU;pWAT`aLUV}c z-a^yM_3A0Q7OGx4RK<6NoW1n;2+9{#n4*JqKfvlZCF#>m;ht&bAHd#jr>6&xSq z%@HZ;E;$BKR1_T5_C1vu?(OTgtzFAz@J_Y5J z2Bm%~SkkgH(fe@I53wt(ti{8>PR00KIzlk0-+t!^u5;ZR!I1UUbrC&Iilm7i61h}u zs&Sg(a@t$S{p}Klw%5}qvLv)V@-Ri<*8SX0T2@woHX5^|H>nhJwzsNl{#35r&&PYB zJz6fX-|H-vnXLJ8aIWs=o0Tf{dbam^`1c|fYEM;p)}BY94DQ==OY`1Ey)+;5+h5O{+mJ~?{k3^I^f@9Tf*&M8#{@Qsqm2Gt0VSUl zuB%iO)IY4oeR9(cnxWeKO03ZOCxLC;z5#P)B6`WTK$+-dulbrJoP=FTwwA~4<`>7@ z(Iu)|Ehwa(0fi^o$2+MMT-W#XvqZT-K5H!^!rjp_crJCTKHR+Y;ySuP*2LLQqPtJQ zBg*L?YyMI-@8U%;nK1_+&EimIY0&mw!zHy`Gc4}U;Y^Y~6=CRN%ymWAby;?`0LGuP z+ka}mR0%U29iq_rmOF?K)�Yt`;s5X;Sv@((C9Jr^_@)7lt+N6OZd1)Ze)+vG{pb z#V|+SOt||G!4rg#{nwUCp2r_{DZM#<+b{aGWZ5=<#N3LU(oNUV_KorF$LkD9B+hQzgBzx zBYu3if@j^`+g&=w?~Rti)4m7uGNd<`g6ukFq9zFom)2VAwz#XZO=jsn-#*6DZk`D2 zbwtM7sr!0vYr9$-Q6Ju^ayKbG%2!e8TX%_p_Nci5Rq$%ydRQK&DE;z~;I3-*C)v?5 zaRRjuD_kuc0&VHuU7K6SM@LSAe2&a-Ty{c=fagPI`Y4wQ1ax*q)?+XF97cNVCbFu$ zbV2YvHW9Hyr8O9o!}Cp+8z^j*+|#Dw`o8)VIks%BX+!8yT;J@+mxCvs+r;xb;@DqW 
zjlHX`ZIbBg8*M3VNiC({W#Z>VzdF4~x)dkdZG5CV#JopVuVp5W^ z=%Vn}a#yT`pJd3q3e(g<*V-knu`TR{BwEq?bZ*b@&5MkHLajS=(=6P*L z$@d#mgWUY^EC8to$5n9Ij}dL~We?4*!`d@-t|s}&lNnpANF^{n;PHv4OEG2`o3bnl z-rPBu+G9OaL6$otSg9boy)IOsNaJ-rP;0sJLr=}m@w-UXvBz9FWp0~8=5r-fwv9I)jmK*g z&!YRi`=G+>-0q`{jRzjP(!q@DkME=0v=g!agaj2;+C7!AUzpo#&VTowvX^wR!I=0k zIS{1#ls+*Fg%PFByutipwQVClWa5v$nA<03BB=#i>Y!-l9qw*=^n<=S7mm6^szD{| zgYNfyuWsO5k$okOqwxvt+LZ7?88R-ft2W1>LTazUD5`?Ol6O-ay7qr8@_%C6OWCRX z`-AVSVpo6fVRL3FWFOElwctg}mUv4-2uph7=;(NGRXIdiCO*HvEQJiBuq)DDA+4-=oVI7xBDr8qv6T_?Z5?vVEVL@*GpW4d zK0{s`Vm$~rX4`d*#{jzHKW4l81F!f&cE?XC0ds99c;w1U`!akJ$Ls9ovhLy!8-7`3 zwh)HgA|bgsz_DB%>w=M}AeyHrTTvsDF=EYnqkErzv^FD+oVKQAcEx<#BYCOHCQ3Gw zu)9M3dlrB~3`717QotwufSBLM=$yxs?tgn%oB7S6k62a*4gA|GdpX_?4@aWzzX*d< z#3X;YtM34)QjY~bR2Kl=GnxVv0{}j73l#W6Q38rO^iZw4k`gTZ&C?HTitRs}x+@@o zsjpFkAH%U>-HZ*aa-_M`(XEYw|?6uRzSF@5QyCC#R3CsHI6ZBVjfe zM6{*J6OP#y;g>tb(d29fbKT)#&Dl#1CaBCC(Sc$M1GHs7rSctE2LngC^qS9DT5kbH z;(D$hw8qpMBSZ|45DH9sHN>{BR-J#qojl%3h<50B>_oNJUjU@TH6?g`*eisx!?HaZ zfLm8y@4{thM{&kUPo?cR=cFJ$ckNFBWfExsk1?P}HJ)P+M<>-n4A7O9y1BGe2|>*t zh?b&uckDp|$e5@EgUc6x$rVi{P4BvE8n-~m@9oO5F}fE{7M|Z9=EBQ|tzXa*>t9%4 z`N0tJt)AWIm+!;iEuj^HK+gK0bZKJY!)bkj)*ea!4WkF6jHp!(ZSP6xBF54SG_(4G z_tf_lzSnu~#VF~@zz80@Ps&0Vho1m_J3W0Tg7ypT`?TlY546Gi_TjoaEs$Z-)*tUi zDtuc#(e%*v(fl0IKcP+IiGIFSL!*H_!81s@Mqhr0MuT1lX;nqzdO+Lce(p<*wprg- z800%sx{EIQ?&0?m?*-m38W*5*hI%NZ2R=uL4P?7mVEZZ&Z;-b``-JdpmJp$ecF;xX z;cHx{7qoA}Y9#E}hbCx>JEcLak7(=_VdRo4P7T8LY5 z2IlNf-^!5c&EV<(GJ?a1P0TSp0;z{ii|&9ERM2<1yCc3cgvQq^++lmKY;wc9mgn=l z`=D;XvCoS&)V8*!NEV7{C0i`$Fy*Z%(0|BAQ5m2w)9}CO~+VqDfR@4pb zKe+bRyy<^~-0acrFmY_?J{UGH<=2=7khwj~zabhkV#V-9SyON@feEbi0cp+?kJ)wy zYe%F>stm{gC#ujg{0tgm3NTCa6X7`&DIlW7;1eB?>wgja$LV0|$fj$}H_Uzje%qSv z4cC-`;*a1V?}#)shF;U-oM7G-hDxKtsj}2wV#s zH$6b3PX+8hbykRbaOcj0tWOhg#|V90|2B_a6%8c+f55>@palRY*mq+En%fFA7b7U~ zp45Na0*wCq00Sft$s92vzH0&Wka2^^krjp3na+!Wa|ZMXg(0Z-9*E=x*pgqi$?uUn z-NMr|rI4Y$!nQ8Q>WqTKQ2FGI89}jyfUn!|7JUeO*bU&vAUpum3lMjTPR+@L4#fs; zz_WF@V#yr5i@wiU8OccK1K6>={_3MOJ>V5Fe_$SPQ{!MChk!VmJ1g_+mX&Fx!K4o0 z!20otxmwIubNH7vZ_rsT!k)vJe=P^=UCwJvd8a(n;t%G(-Z{%bT)yshy{FQ265vKP zkfqlKb{FT>c}7H##`yVShZNLqMaU#D^$0pal|Q=N`{<0(5OJh<&d$_`^N4I4^Kb1{ zkq6Q0Ih&HETF}c{aqTAy#kt4+%!EIY&8 z&!ntPX9R<97Y3_$eoNK;^0uVzoM-WBOvV&s*(W^eKQeT)zT*1s=JUFg7TuupaQfQU z)wuin{4CvHF$mrrhni{qX^E53bz3ZDcdy%Hh!tFas{u&WVn12=K1L| zsrVHgI??IR3P&*HOw?e+=BW-Bv7$X{9xH^%i?vfIAiXayCE5?Sm&OU7`pv2F_mrVG zA0H`TGS=Dt9&qy=)!V(YuSl)A9}EPM$N$IrD*-$Dq)hTiZz9D7@TWwY4)py*>lu<3x&NWP)~BMXaY__!T} zcmd;vOD1PcpX~E|e0cTA&9nGb34DL)YiY-H&%cOVqKP=AQ20Qe@m$T@tFslC{>)oa zm)&PyO17otZT!a+_acp>Hp5PKQyG|zGGiq|ta($|bCh->$xGWHfC z?3%iyBn{*FhWRsOc%)`M7s2JJ20aG{(Lidgf61KE$B%wnTMpcI)X_u?4amRoo;ljJ zk2*R#&x!}wpS}PR^j{`t{h|Cn`yEo6gxHC7d-K{sKV)lH`Z#$FFHh)IHMT>d72j=7 zrjO)E3UqXIunX=VXbwHwGq!Ldiifz=X+bKAyOI7F0#8TGxLANm@?RT~`~qg{ zp7T`;^0FIBGxOzaEw$xq8R?_1=pkat%Gyj@1K+AYcS14b#|$cG#_i70 zosMsBu021fe$DkbBnijUh1VzxoijvjMAX<2882aUOYNnsCFN@5}##3@)}qF!-qnTO(rG* z!}1P#rXLI+tIVUPr)RSoW(6-lXeI~7#p#yK6b_f@=8#v=@K*@DakdPt;|rIvF`XbF)SNg4s&)Y<{g+b-F8?^FWP<3`C24np6L+tiSMO?aG> zWdPqwn6sXEmaGtWgfKQjCbZ^%|CFUw`N{la1oZsRAFiF5_+n~IKTD_GZvlOrG=)LC z^P8P5*C%sNUYYiz`-DbEOWi>|Fr);+z8$Zt$NyHB*Ws@;E*TeQw8%|eUz_reF(@%o`ycx&SRzHe21$^`tLkE-YAG%J0-1p z#!Z$UYEK3`M%*b5C4GpzXQm)XxmM%-{&+q%kv+11a@Sc;> z$L)S$p|#ucm(_xTf-c{iSC2L~dx2baElYfkx4d>1^?!8e1fb&LRbs!6lYjXl5EG~T zg{_owt-Nk;w))}jY_&kE1xtn-{mJfJu)fbZlUF*KQ6DRc&c(0ySAJb8LX4ZU0vl5q z;fj)byFIBp@xs%>g6`kH8wn2&%C0D?&tiefBaQBq>?98XP9ANRaei-g_4WHA&m(g@ zEi8RCwZKK+#ZZ1n^;XO3Z0}1wX*oGDXXkq^D1=|)^S*0lRIXHX*I>Fpx!Zve!e!28 
literal 0
HcmV?d00001

diff --git a/docs/images/kafka/schemaregistry/schemaregistry-crd-lifecycle.png b/docs/images/kafka/schemaregistry/schemaregistry-crd-lifecycle.png
new file mode 100644
index 0000000000000000000000000000000000000000..4cfcae1192037ea1fd23c71c8018e1040efb61f6
GIT binary patch
literal 59049
z&fhSAR!b~`%#9iTR+?n9J7=lZ*rIof9kxuE>%Mqo2r5`SH(WG-00L#>G<~c}$n|kP z#d&gi@ubO?OxO&TXB4l{v za=wp>6Q`~S2hKK_Nh@-dp(tg4lv)h2*EHF+rR)B&QF8z4W#7BLhWmm5mlKK+bJxKE zW=$sb?Ha7mIcSon7vqKdZvICk2&Hj!HDngq#8mu4LXGNASh48tml`5h4*wC0ZN^TX z!atup!!5+uKV<6M20AA#aez_{7l#9={#Pkq8PN3i#SLS?lpG=F zyJ5fp3H{L_DyW2ngwi13n81)6NFyPmV^SkTO2FrfKL0n*+lQBz-FweH_vGiC?``Y$ zf&r1P7#s$5L^IEa@KHI_!DaS5S+gr7tzsV_GH>N(Ti6+)s8=`d$xyc9vsN zz<59aOQ5P$%L+39jOcEBk74f0z*voA8vAOquGwyu$g#+z*wrgmRnwr4ld*R*0UtCX zpgw93#;b@=!&Xu9hy-hfC<_yLzd>Ug2_v1I@Vb*EOeFf};XS|2O0m_>9o{$)Ir)W&k(AEP+)8q{X zXPwCjGd&AE`HoXH7op{aE`PA;3!RuZn0#OhFe!@E=_dlKjuqqr60vN~m-MApdQ7O@ z^z4e>E-wum47ai#U(~U|1`u>x#%6P*TqWf%WG!25PeS!491fIkuNyv}a|j_CDc{dh z4JY!4Z@H0Hm%$^hUk7CGC_p8iA`YO3*5Lg!YC)bKm(ff83ZHuny-qBo3KWC|gck)` zat#*c@X;;}uU-~P zE6)hZt&b>^)+vpc5`&U}0?UI{M|`?@q80o*1(-FAh2OJd?QHV1b9)^^CrN zg-~km{etKCLQ^RDCKeXR|eYy}eewCG_4H zQm#gWII}o7i%KA&YTkd~%ZR$_lPRXuF)5%Z;`YdWkxg~KlvnB*`d;_+l0T*S?1`~j zlREDak)QoU*m4? zC=@2Rt}z#eRS}09vvKo z&y9Pik^XgvGPN^MZsOk&dDJ3tE-+*2&dVv0Q^Ie1p1s2@6@Wcwb}4T^L#6ZU`--)= zs#fz^KGp4tk{i~xjlJG146rxMtuf)bpmd&ofg+&$uT z_D>DVKV#~C+iwSM__hP8>d_uqi&o|~kUH+q!~Phy)gZ#}GpTuUk`||UavC*A>ws%j zB+6H2smPz>QThCrrzAjtQIja}@TP_gC2Xf=XcEtcukv|9E+#-UB=1;&v;Ar_({%<} zSg+4L;6itoT+w2@_%97uB#VO&K$MnD&l#dH*VXdWXCmy&xkDg6`}>H;CP*rg#g zuVGC`sm&jmJ-lGY!;?G-gc7vWNH61t2@p?MbI2*Mk^)lA5XEo<1f7h+hRRH>ar3OV z>jTq(08EwZg2SMUeugL|fFpEZ54}1d`mA=+r{SJw3Jp%axhF86!9|DneeK!)gt zfEX1KjXh@*=jgdD!)&P+^yhpQNsU$vKA+M+Z04}=9?rh^m;_DB@91ydc!ahmA9Rpt zH<43U2Y1vg>Z>jDoN7K_X8Ut=P*=r}y9b!lKKm{K5CCEHAk~)dd{=|*ojvVgwz=|X zNLL9zB>{~iCADuW!Odw2y?rA8zUT0{=7}1`%*zQ{T(i7W7;vN8iD3d)rv_cm+lzXf zXW&OluxfLd5`Y0=O)_I9h_SFeU-cxqN>2H?4UO{IAZ@?xiuM2aKKVOYQ69XF?{O5` ztI~d^Z2`D2DRouw9{o*ls~P({%{>%sO~m}!Y4;V)mg5}C!HAgBs~4BYEM7`C3?KVw1>h~3}Vy`_E?Tk^Vey3#TV#BO^@I!2X{`7D^ zwCy16$sBJ-p{Mp7E3zDlMRxfVt5hFS;F_B?3lt65e@Z{k`r&;qs<9+y2);&lRov$E zw+2G2QqJ+9VyK6%G5jk>xO3^tZe&;WKF4yivY)qeI`+O|coGu=QGD=~yrr75aDGf z-t75t4+TWUCIrEFxCj>R&QWQy`F3{eHJ-5%4n7)iq1$NLf3OD7NmpW>92Z;El?b^3 zs-6`_PeB&uq4RR~J3^K6*pUmiIhG?8;^5=mc>9_7(MhkNNG%GJ6y2Yg?mlFfS5jA~ z`WnC{IR1fhiUY;))Aq(H(GAXPRzJhP*-E322OzPYbCzOfz)53`WDW-6yrsip8nVA9 zMQOgeLYvK|Tt6}0B_keM)?@+}zpJRBun=%x3%bzp$yMg=RYu3$WpIGi<45U^ymy&- z-^bx{uf{RPv^!{8ZQ$$!nHHrOX7CVN5xE=DBm_0+i!PM@`(x;0=UwX`6YADlP|rvw z9EVkom2iaTVB;mf#*P$hDhAGv?1u!!eF%u!znCqdCuvDbI*{lWoV-Q6{}bSuSLloT z2@$~qc z6|(Fl>YLp+j_+n} z>8$dzL`s$i;SoZu>@0aKHcoz#xFIzb`1X6((Rw- z?N0H@)4sN&Qk?H2Wr&K|c2UGT&#yEGP;yYrYv$jFhhLfUV%OK<>AgidB?4;kvm%Tc zIG)!!+f3n3nUaUwt*6EB)=t-R2;sdf=57&bcw~R{`dsFH#4oP|MOPdLfw)_Ehrs`` zJlqfYaPz-q5!aZ!>-=RR?(UgI?FbgzolLOYSef| zzQ3C;x)W~p0&Ar(=t?%X@H&|oP@)Tvxl@blT86f^2@V~o1o#&oi7e)jLIL)fJwy2g z{hBv#I71q}g@=Fve2Wmmz5PmI;@yhZvi8c8WlkSSx=(9AB1*N0)m|LJ-|FAhMK(PE zzoV}0kHd3!KLq_c#OKU^tS67M(ukC{62_aIt?T+Y)_c+O*#(2S6nCr=IjXIr%agOo zA-TJb!<+gEWHWnY=Z5zbKK__xHQw#zL3+CX`PHnR-m|&h_VcU13nUEP;cd7lvVJyk zm|_Jx*v?$rA8A~#LyZK8jRsUEl6~Zh&$j8?G052aqz<-mix~2&l0_jEO|_tytg^?k z>cmj5kxvAF5o8bD7ynBvgOU3a3Z}S*wBNxm8FAdxcWdy&bG)RHOxlC_{}d&}LNxU?iN^JPgo5Ast7h$51wN@uw=*!R zBaI^a^3fvN#D;Lyf|Nea4v%u8mEmD73hrI}=vGa`oN0g#b?Fjaqbj%n<@}%%HtnyU zGsg{?eu(RQgW(k0LvRXBvH1m+|F&FC*+z5(ZdiU%xbGdg9W=;vwaGTmvgL!@FU{%2?bO~kjV&4Dsxv_7 z(gq@P@BeN6CB3YxAx>v`wtgin40U~$qOVOYmtVi`Jbi!QSIa4B&j}N{D(?7xI(V&l zE3^&&SmgP>0n;UsKR;A4D@*s0%w#8wL>m!W4dOHN46CyO&@Z+by1?h! 
zGrs*_Ue-m4qe#)Kuw?3g|&Fah$W6qGc*B5bK??YTo$S+*9th>$|h>yUp3# zzszPDT#j|^dvo(wIDbrFof3h!7vu`#hC&WK@cw_k&&r7GWt|1oQU2kbD6rwZ1IBpP z;wSKX(qhPs^ZlXZYN^U|*HEW}-L;IW4;Bi{?=3k^1P7w$WHFuz#2HgXng0 z3e%Hu|GA6+<&Ucos4?+}bUbIN9pRpylzpr1 z1UWdK6*pr=SaBcNY5UyazKHFfxKQRvR)bi$KbvT4)swfqQ`cTwQvH!cXGMPE*_=tN z*-XBhx)jK7OZ(>=9HqHJ{cJy7OmCsV)b#B93~u%(6Q}~{a@J7E=WF-V-Rp}Ty(g&l zoffIO_!Q59HQQ-{S^hv-ZS1k za5;-uwMg^Itf!HjnRS{Twol5*SJiG(HpClT*t&Mtx)-o7@ogDAh)91Pl_CTe13@i? zq1Jzm#;G(>X~GzB1MopaV=l1+9YHll_jDu;m~PGFtBOmMF$ZKVA-IRx!q3y2@ z-=qIJ3%&XxMyMWj13Nb;n++<&2YMzg9_AtMM&vz-Q~OKKf`qviR5$TaYW#8ke;@qD zKOU)iSB;`?1^A%kTdW5Xr|YL}ZmMgSWjK2lTz+`P6Cs_d)lZj>lBUVHj~KO^d~)~r z*9%iZlkVDB@oz`eQ~DnPUX1B!Qh`xQ;)JLDJa1zs!bW{RdCs4NEGFJiA=|FClW5qR zZeaYV_C0ws_un~DMU2BAS=h7+V64>VT(4=$k5qoW$jV6G>NG`0%Ep7D^yXNNp*vT` z%gdd3Gn6k)?2ip9G?u;9T28@Yint1AgyNH*EfkukGK_6Tlh0+oxek8OuJ-+@KB@{wUr*-JHmv-IV?Nch zB-!9-&;(t@1{x7rpVtzNX~V8;?^1gWLqb+}@lSP~%LLu~$7Aj&i0b7psVZkxtOaC^ zCwc-G3w6=)Xixw(XER#@{&*}C$MeA5NLW^zh6-lgEVMhFGxx&XswlJnyvoysPr72x z>(2a)!{ZckAmqd(bf-~jn-K+Rfoj6CCo}Q9LZxJ!B*0=| zfdH&iK~e#T-ZCn!IM}owQq>)in{f$i4A4}Qxh}G*en=6vDeG|Tv}Ny;_5W$<*WLEi zK9z)of=&~m`~XDikW3EBZ3pE;(3olRVj5B0I^AU5`b-r!#!@AyUdRS#SPIb_mG%9; z5#R-8XU?jO1vEj8+ix(AO@`1<=O1oJLnZT4d-FBs$(y2FwRqD*i_2~eg-zMJkXLFk z7y>a309%f{R5G4QZgo9^=c2 zv6RLrE?-}_RWmo!DdSha?~88BTj^A3hD{?g>slJxuQ{@Sh zfJcdcVW)3xnIB}|&w5j2`;Ae8@!_bq#=(jdpRQqTV87^F^k#TwQ@_kxj__$Wv?bMy zH;-ko&h{H}km#XbU0??id7?T46{Wi!5HmCO#YC#5X-9b#;xY>NDm983*U3Thsn1L_ z%(br+dzM<-Ok?NF>X*D6BFZ;8T&83Pzc6g3*(b6l3Pv)*Dq>F39b; z9E;!PXF-a2UPf%R3(xV1`eQY7Q)_hJEQ+UEK9x~xk8=@9a#6GNKsg((ZF)HWGR>D= zbQ*J*^1+quyhiF5NgfT&W;+ZEzZ{EgyvvNTp1yi=Pw}WQ+}VEHwg8|?mPH&R^4|qj zzVW}A6PnGm@qgR^bL3mampb8A!Imidy5hQh-M0ncy<$83&&CvVQKHc%rcc5OeLA}A z;`!&H@K;YaGAa^%Cq)^(TX!iiWn-uFxu~1)02HGbzIxJ+4D#aSndH#*tRIx1Z055e zBh&jC7q-^NO{gf&$F6z>0-eXl83;n-IzpgP3^1mD{%O?V|IrM+zkNV)MB$r}*#ucQjyau~ee6!Rdx09T9+3&j-&9MRJ zTb}@nyQ^7z3GB{5MUyZ@a9!#SE~vQnz2W&}7>OM_?HT zcZGI-`LHm`kbr`n=lOD9^3x7cW^D?9{w`Xb(4}=sYgPS7nP|9S5yZ3%IPGO_ERuro zG*zkSBJJHojMxeKw2#2AGi^*-#-7(q=-H)O8O`)|BkwgHb&sP~E`Ie{XxF~8m+IS? z#?HcSN(H58MvPEEKx0>Nj`RwUuj@5`6g1wt02??0(FrhIV@wEjv+b=|EtK@#wx?Fq z*Va<0!<_fkO?9=u_W)`A>_>}+M#8zR;U44T`P%Ne&nKxQpaP$^GcT>F@4-~-jenZ9 z+`wVZ0Z+qFXD8by8E5i2jar06j}^g&YIe|CX%3=pBhpr2`sm z{%#US2LP{OviXSrE$IgASw(*RI{)7VNn!N=1O5Ne7zr5z`jM=u6oAg8e> literal 0 HcmV?d00001 diff --git a/docs/images/kafka/schemaregistry/schemaregistry-ui-apicurio.png b/docs/images/kafka/schemaregistry/schemaregistry-ui-apicurio.png new file mode 100644 index 0000000000000000000000000000000000000000..2db7d13c7e04f65d375c6c4cc4c2ccf04f89fa85 GIT binary patch literal 57790 zcmdqJc{tR4`#-F@w2aUuMN(H+B|;@+53S0UWo$!|B@Dulb(GMCN=UM29b>G6VN6j8 z+09tS7_tn88QIO|_jZ5py6)d~e~;%lp6AbJ4*vKIpU=FP^L?JL^L4(?d1_>ERS+Tu z;p5{IyngM{Ek3?&NIt$TFSc(5@94V(;Q08$_^w~lHho|_L-BuWI==Sx{$)YGOV`ev z)w^@-?9&5^ejg+Iw2h5B!}4xl{}}mXyY{8sCF#beHtTsp@_e=5h+W*Z$?`*$=_Zj~ zFJVWTm}{-v`lRo5&QAVL&inlmnLe_|oql~*$z{0c>$18_x^L@ z8i6m+)|GO;H~B7K)t<{U&VsMak7umx`E~P~P-@6Z3fdMwV2-wOlrb;#T6C7}Rz< zJUk1|+hDXoL z%#6`B%Uy_ar_wmZ-9_?s!41@#-+x-~N@>u-82WI|^(YvrPeMb_=;Z1k%Q8V5D8fERdae!BFtXDYXnO-P`U}^arK2$5O%l^FAcTano7Bx~(+-b|! 
zon9yO-;H^^Bxr=yCnHORQx`b+< z|MUHpZSNGZ3Z-bm`p{gzbKGo#wMPY5>a&C8c*^X-xybERTVB2K6BFlJ9AdltI&7G;bl zl`FTs+R-eU#g^B2R)29)^z;yrfnw!jGza#-4=$DEbf>vf z+%Wwyver6{r=Ap*c~FvPlblm`pk-uF7`%6>9I|RJs>;kE=G}iv@zRC6^_OPLS`;5C zpB*YbKCb}_zOTq;)G&4F#mvc+o|-8FX3C`_{o$zmwqHXHYMdW$(H(KX_3WAigVmc~ zy7$2WX?c!*3t#tm1S7e;PjQ{|FjiCS;D8dzP>Xw)%UWvb{uwm%?i&8BY%Pn`eY^D&hu0&&V^ppFEP&+sZiV;G~3vLxY>>e$y zeBbA!+XWAe4;n$s>H2_II-KI7BNdX2A3v<*BZhpfQ|gh2eol|9_?flCl{_c?TnJfFcht4j!Z zHRjNi!LIN_)jms z#`9S80Bp#3JTzsP2_N7zxb_x}s`^$i=f|m^H5${k!(qypw$41`W67!rtJI~EeLT&@ z4&99LcQ-84)K+VBMy?(X=#e9EAiWYCV5hjY$=i6H062z@vc*Tjd|E4(R|v`#(c=D8 zpLNwJVM%*sYVdQlA$?_H#m`YKzn1VhXv()u{7f9C^_|AdqX!|OAa7PC7IW6lvIxv5H`D+b_?S1Mb3xQQwqQUWt!O%D)0Wwhsg?Ca+Mu@Qlf) zyE94<6|HrSk7%%Tm}y?loI+Jv=y3j#YNuA2*#==PZ*rj>VaUADVv2NpFK;@y%o)j; zxJg>uSI44Fk}EL+p#;wS4faOl8mzj{c<)1NYxSAAm^rtSpKEn_i)TnK%Vxkl%}kh* z;y)Ch>r?L7l53M#o+>xf<$4J)Wg zNRYWEslh^OsTyr7OEu;y`>k^=ck$5)kcR?$Igk|(p*wG$kosHiy4 z_Q6Pm5|YQ6RI+wzE5A&8A9s?&ExmP|T<7iI@or~pkXk5bb&_gb;p(UxxwEM(y5536 zsifuI`D(3!R{FLwSu6FV+${>`r6U+*U7wGXuh}&;<3}HS-=YR<(jmh~&wUle6)6xB zR~keNZXIz?(+rtyI+^ypENub_??}&Jm*MK<@r$KKx09tfu&Oy3QC@*hFCmpvZ%1gb z3|yR?s!JYbf&g`&(#G>+*Bs#^37>@Q<>=u?JmXI2&*xc|<&4U&tZezUEJ>AjzC1Gm z_SG!RUC1s-rwcq9de8!aAXmOF@jvQ-{t(`$x4PVO@atO{`s zx}uxXstv`)kh5FiC2wOaN*0&Ujq7Q;vQ8;%Mbn9*M0cAk+O1yrjfD+U_Liaawy`x zCy(Fhqgj~b8X!31|C)~bYEbj3+`f-I?kxKW|H(9)$vJ;oTZ(xlqt(OZ4l+emFD)Pk zw#fWh-{;hNIA*IG*sP315g~S?NNXCoHef-$<(43FL2`MmvqR;JF!FL*iDj7+!*-I1 z4_d*pmgra48Qtoir|T%S0Y0hYz!hym2vI}X5{!1%&H=-?yNw=kR30Q$QLAj(SotTw z5IW=;%%fptzpDGK`mncrZ-2w%urA$$F_NlZ=dUg-u8|wNN`SRWC(&;>S*iz(vVKJ1 ze*!cr z?Hc~l#NNFYj?0YQSfI(XvaAGGzUG4eb?boW7;A7J;Gpa*U0AQ2*&U;TDA z3YJa?G0rBQa@2N9pUZ{+6kv(n9!0X z{S+}|-o2-yyyn%Pqq75UnCDw5Hxf=GQktN4Ar;B%&`s#ggSUessD@gU0sdAi4Eq+^ zYH4z=v!r)zJaGY*i>dd3vq1<(2zz^-7DU4cycCZ-I6OCGG^EWUguRTgUwFhXphJIo z@Nbm!My@4MUYI|2>Q=(||A4b`QN~229Ps@BvamCxmRW@AW39Ejw_L>U3jIeoH1zZHlL|Y1T3>SdH?r4q zx^SR1c4PWRMyn_<<53$ob$3DCK>`xV24mA9k9!QdP=ycNM*T^4f8w5;H4=4<%_3)q z&#fR+gXga+%NOH9=p=l4hcS|pYjf5IiZ#{erSP3P6*#jP@wWs42voI%fCtW_dG~4 z2+I#a>|C!(s$V(|7ByZ!St(+3+Rbj8VqU;1#jUOR1*xxZ6YO8w(UdO_sw*=EN# z?W;+xb;Ss}IarXC5HP3>Gqf(b6XEFWoan;Z$vVxt2`C}|w9fUx?50g*EL=eLGjE)_ z&QsQwzihnk|NiOGlp9xUhI3BmN{vrU;BK30=RcxKA~~IJBqzqk`r1;|bjgY5|L3VJ zch(D5A?v%E-^_NnU6;hL{6}7RN8*2tv|yDu1bGO!nqk}>k!LKro?3`g$TG8zEmvOj z*D(&o@ZKulKPCq*3s%jI1-GF+s2hX;$dq;H%gVTIIMHdL<=*&9j|uFfkL>>IuDsXA zNkUUKzQ_G3ysyaVoMBu^@o*cw+!4YQYiKJoZ-sg?tdKX*DWBzNP&jy0x^Z8bD z(?Y#rMLh2r?42(V|5(oW@g3mSOC#{m_-3s`O^?6*W6i*&sgtzrsIBz*o-)-?5IJs0 ze8fR{^RFMx`D?whn?qf1;LgTnhn)KF@n&M`w|*QjgrRpX*_@#qLtGVIwq?43**(2< z6fRAvdwOQy`Tu)|Ha@KrYA?=Ga__sHtOSS18YZaPiyW5OC))%oY3jnW0JEXfa$_^d z%r9mZ*+pP^*oez~H~+cQ`^*;OU9Y+t7{rJ-_t~YssP>^*tuQEP$CdY{y>|XcIhRNDOO!7o&De+fVit*~ zkpU-Kv>H{#yEBpmPj~qwJRZ2yJ>=qsr!nWD4=TP#`nyQ>%Qaf3&hVNN{8QhHEZ=Jv zEp%LHoOv}d3;))2GbQSu8}qE_O6+0y%Aijy%%6gY;i-EJUO<+;FfC{WfeKv)%=2uy zy4G~?Y*+Skfgo@W;7t5VDsUI1gQ~uH6|Tf)bhlFe;hGN}v@ZNYqH{esW0~#EmDEsi ztF)LucOc`J2}>mFjB)m}(11A*h$9o2`Yf|A!=J)|TL|3X!)it%#b-(y50`~$8ut1hgnRaA`V*SLoQ9tn5!-Wna$}8ubdXmnbkV zaZ=!nC!JnDWJwyS&AdQ!Y7tZTTGuxtFP{^y(1OEGIr?yKDw~SKc~_6QPXv12?_#s# z9q48IE&N;vwX~+gLqh|;DIW9+O3|T~r|YXHjGwk${pYqOp@;B6HM_v+V))qS4F)ms z9+uR)5N@l&tDCM{h)%L6UP7c?@`1^jo+X9##R}dGx$+dI!0`~+@!QN*F zGG7w`Vn`Hd1{8}9z5CVAIjC@n+~OBeyPaq<)T#9yfD!c(0=o3&t=TJrV&}b9vO(Ig zJdMIwHO5HHwy80F|AkK(P~*phciqaiYh->gMc2EHhl2_cvjfD?)sd)4?`XFoxnYMG zm9O0Jai-|CBF=5BRI}Dq;|PL6;pO-OQbu~r9YpzZX#YUDyM1eNWrYDgm;6q>(k-i1 zl9y1?%}S8B=#5S2Pv|2~F$lTd6MmIS8Z;|=h0b~=p+2%Q=#!?^sC+Csdifp=OHz~R zr%4g?CW8NYfa7iTQ`P$;YQ-*_>3aig>A&baE^VdkaOG>mj>PrVgZk=_dnU8Cf~uif 
zV529`jb9Qw?l;|&A26R4tHtYsqimYA4DRm?(Q$N;0{^g%GmP~$yz+1v}1ah;y02sd79fAzO{SaJx?{Xmwa6E1a{O4l6#XZ> zulQ)TeUfwA`>A>xM9$Wh^SB}{8k4iOkVR)NPK_qW=+4I`K8Xov*NC2#r`g_iP6;W3 zA&3>glg)MbC3&+X+q|+CSuFyYaxN5IWWfD|k~!t=p=kgPci?{LjL?UK!VsF2#OrY< zB1rf0oC78^qMn`RL@Nt>)*ek;nkzs-(Q73wiYM@qr=Gxa;N|G$pRsX+9qplKP|eu0 zvaZ%dz41c#>SjONa5<%8ep2Iu?&60F<*l2fdnb3P=-tKmyIRNVIUZ|iw#qJ8njuaa z@Kj3wM9MqIqs%Ls$D0|NSMhSompzJYcl|u8ECYFt; zIvFGgx^y?=49G5!KcmW9#9bNhs4Z50yzk|HD=e(ik(^OP%XbE;wEOehrKuTUxBD&H zl_+rk6Sb#H+gp^5`dQFsGDvcz?TpKA_1f;K+*Tz6Izu&DGT^Uy;gBeX(ov8^?{5*~ zB_N&z^8my7{axjhn~%@Fudavf5OOHB>cmz=qIgj1w=}iS9t#Qj zbpbMN-2oq}b*tx;ht|B^ryesrf@D4W?b9Q26m$N$?|-2TQdE~Wm{SkCh`P$5Kvb5N zTff%&n&HaI&BS?0t*ob47n(fCSlM#*-68uUYDx~ie5aWfvTRANx}UXy{kUr&`+8Oc zg`(@JcXJ=7sQ4rZgj|gllSpx0O!llRSrgd*7~A@Wn#Rl3i6G$olTza<^Bn);L<@#4 zwQNO(+%K)%G7Rc1Ck~xZ*&gh$M*biJoXL|DG|koFa2UCc(3T5!`ozj|@AXgP+sdky zvMkL+Ck>j1+^drb#SaAwtg@|`Gdt$J;kV+X&8tbNOV!D;YHlgjgMJID)->KrH%n$u zS+kjSmC=P@Aq3AH8_2xdKk;q3n(owxs*tvf-(OQHHz!u-VsC~zUp&%Z)BU5hzaOh6 zYS6J(;}#JzP*x+@eyg0V=$PQR)+F(mp zpiN9f#?Wgf2I%n5+ax0s8dcHFQvtjpr!rwJ&Nr@QTE2F}6~E3eiZi7xnDWl7#;S1~ z58JNprYKka-PJZAUW0SWEeE9Md=w2kol@~G}09(YNa*||z6#l=r81$>&BcxYZ@BN(Kp<`g%0o%CTo6<%I&M$>jYretfpYrh(OG4a8O=a9$f z;B~>m`2ftj7PtT|^18lFvHCH?++u0%RXVX~sjP5C9Ph3}y3*2L^)XWTtxM-t(#&pB zQR_!a?{3q!)ARw3V((gkokvn_ozJ0BHDE zzN0yj%FE%lQ$6`{6`s@~1Xs?o^s}R$)&82+j}Ny{>i8887Cgr`GkHe0kEGbG22qwO z&~Cj9M*DM7c`WcrwqsTFnbFvjeelZO*sUEU4KPvc0(8Y-rNb_@=;vtT)dgZ-b-cpS z**R##{%xWw!{Ne_^|omQWX_sBMqY_SeDH(EOY-|70{u&bXYl?)XW$A$!YZ}s!MNwK zKw}W}Ry(}8VwXAN1Qro<4zQsvPG$w?%WAuSP7Hh;7%xnCJy=Eehc3_UH)q#^Q+LY| zWs_ZMCN2K>Gk+8zf%vISAVz%U(AnPG{Yu5QA}G^rtYVOJSE`4DnfC}YVQmZAg!WJg z&a+8AGbRfrCd+o2N?lZXI;TY8!l4OE9(%>#j)FX|cscuiyY7xWNZ1DiT`5 zmwxID{$cO_aKuxS*13cg$K?w2VJ^|gE($@|%7j@Q`Q^P9cf&1b{t2u_3QicNy7$8T zQ|Q6Op10G>B3yuq+dL?sPRpU3yA`Ur*U-M+4ms8vurP6%#ui*Xn2fWiu+hE9Zj7$D z{Yj^0xc<3riwMLpSxJMna%qHFA7R%YK(l+}42xq+mX9=C?Gv^8?nC*=tGIq&0NEHV zF4_u>w_^c(1-N-v%ib@bnyWOm&Bbm0UDx_#<1OehHLo9_l|Dk(PxH7O)Xyk4^6>72_ z>EDYxxCMO~9`RAoy~>(%`O^pnXGR_qwDLLs*t2^Y@20r8+j$ zNdLibTiAPE!Q>w%Z+b3!0@S>K#iwmRz;UN{drVs{(G_5q6=%wD38SqPln{_aG^*Mm zxd<1$=7Nc#hDb_-^6qgX$Q|xT#>ty+T`H;fofnx;>#0xEg6|+X-AU#(%lgKNF!OFu zKK+4Z&y+Qp)4t-&nOL>???-L==KR^qvzL`-766#ng^`k8zM6wR;;XvQ&aSBpx0~w4 zq}9l8L@O8AMY}>Qs6raLz;GL+Z^ctW%84n>iMI}gV7CJj8U>Ib(YRj;pSerjBvCW0 zhv=jE7n)bk6}MM6&@HR2n0@P+ma`_`<|@-O8{qH9Q%)LGdnqwFH35nF z0+nb%3l>4p+sUsoA!Ye7dWzv$<31I(OuAPeIT=_csFY!1O-~W=>3BTi`CuKk5M;T~ zwhn0275j_j3KtBXpd2MQjzI~uy}XH_RN1nb=*JD301dj2_i<9uf2PSWHvPlf>`yAR z^w?(|L6|meZEk+H6%RNB;Z~9NX)TyMM~AO5EoI+@kwgk=MtU`97pPxZEO1zji~8>t z^SNtCV1G9ML%ICwYwfL0!x-& zHx_ko$x?}vuO8du)|2;&y%a`3;&^Q8I&1xKl_SqDb4L>hO%C0;>lm~xnzM%kOVcL1 zRCyxTFVO^`SiDu%YhsLF&&0q7C$mCtw2;!Npt5m~w$g|C8_^%O?2GWgo)wg~&K8q! ze?g*64B)~lZ<`U<*k2|zq7FI;W`c5F3^3(KMMvjX4q7I|qV6+r2T4c0XDAY% zs^zv94cye#rVUKYoNggqsWI=%)ud;2TMszUPW7i zqWJP3Uv{)rIz~IXSxdOKK-hUcGC{{W)xZAZ4DO46cuT6_{UpVox&#%$?Qzu5wza zt3d$AXDzVwcJW-2$f=OLw&R(C0RkDbXr; z_vxm8V}3{UFU;>?m_wIVZrQ4Yax^Qib3cv>wp9=>Q|Yanyw%=@vXLt-bRQd#wvHFG zstZc2rAbL~%#PPZ4jz;!jBN9{co&u}ZJyP1u>~^0Z4GRkHbak*%nEL`4|VS9P=TBh zIn9{DQ!eyI5`I3aaPR*!6OEWw=}eetI;1?n8S%8N`No=%C0q9%nWUK9o5jQo2_2I? 
zzqwQ(?^vTd&tgon_1k?i{<($-{^;fimxnBd3>bfJFkLnUOp>9?P4m2Okl z=rb`(Z)*%wwtX*MYSO~~(q)CgULG{kbXUz=TYo9#nccJEEDL>GINDP+cCIGVNtWJEPnrB#&ZvK0 zpc0*|z6+Onsb@*tSJ?S=X~S)cxtsp`oz*!a(au8~b>R6BN5)_^`}C?{-WH2E7&UmI zwR6Y&rHGA`{2TJ@F3cWoR=V9Kmq}WKklJ?x&9%Eg?13|>_I4EIOeY@ZX+FH9tx(}9 z6SOKovVS1O(#LgC(*8+0NtFQ#y4z8=`FWSDR#wwSsl~-Ibl-jyN_mpcTog@+PPo@* z1z!MV!Wtn5WOCOLVo+iskITNgprcp}Lb~x^Bl6D@A1v?+IJ0-Sa}B~C2PT1ARKpN> zIO_>d(dW`J6<+}x=Gjoj9d9znvHuw^_ldmOJnY**y&DxQB=ue2aniZJ{9c^f;)~fC zvp&?B3)oR8*~J>U-gR-MFi&Teh=P(8du<#l+EA$V2mSgEoX7 zTlv-`vd3gz-Ona;_gr!E&J%uwbL2`6>4=ylv>GO5(D#bp?!<(5QR zBPX7~Ip^L!BDxlT9bCl<%tv5^^)$^&+KiN zTFSL3b^kmEmyLrk=gvmwau&M{i%>+n7)ohb8AW40G4U@j+>A$?Z~qsCD^>8oH0KdD ztX>~cdG3$J^-0^w)HjR6(Cx?-lbz)Mg?6o#8M=l}O1FGN(YV`&G^(DF9Qd> zSq9?xZooAg=&JvhF;|S3=w$T> z$pOLt%`-2ntWbn{*mX#a&c1uAp&l|V_@wYwu-(ekTKWQ zm64KdT0FRDxtK)4*fBM;)~j#q{aX>m$M;WZ?0*!)`2JsMYA@XW`r_b!6ltXTg;s%4 zdi-5h340y=yRZ)4{ePC|{(m0F|4(jwcuRzL7|FrRzA-VZP$)33Sdg#!CL86TjvPOeAim8fa6Y$@|ssIN2v3`t$i@KHDg^jW-Cvo zjI}MGo{7u~i%9pajkj30dXE>jbzZgRX2p~S#i(}`U6?qarc&BQQ2Kk|eZ6{@R#9?; zCI%{BTzm5!C{)JC#Pre@%H4c&rS;f>SoP(%W*FlJ2|EcqA>X3Y0Auv2K+@-hqUQWH zH{@gnY2~G^`<{%ldt^vvOk@e8FSF` z!p5UfefqE9T(>`dKRPTV-vX$$oJBZ8h+aGN0)4$agNc`f_{L@1XIfXTMZx^~9sTB; za;xk=D8@3)#+&=;jx7Uo;XVRuggNvp*AbGyFFV9mRa#dzteApM3P z2B-!LWWHRD{uY`j;q!v~?r8eo58k)sixS6dE{J65Y#Al0IuNEEuKUoVbgR(eJMgr7 z&Y(=?-N<$PJPri6Z~$90RVUMpijxatTi)DX*qnBMro_+YJ^k-f7KYF7izE=(Tqat( zA(Dg~P6uN=-TIR~FN3j~RYlmm{7?sxO0v=4~qa^^5KOB4nI^7H{YHu;SzAA0CO9cZIWO>5*g_4j&%4dplUcKa^W(a`{y>}Pv35#Cy2^n~ z-*nrvTYYL1QsUPKNpYV!uCg1oxJ``e726=szB!{jFpJ%sxnr@oj4Z`Q&n1VL5@ldv z$**sRQeSP8xGN#7(c_^uEv1$l9KaqeX&lH0rtc|jUOqbNyKMff!&`8T^ZY{ECcfo_ z4f$A`CfnT&h`T|8@ZAG9b%Na>$=fyXTW#>h+$&s%@OjBv5hwOb;LmFRKe-5703LkKRj+ZVZx3?^tfC8hd@CaHGx&Ds19sZeO$s zVJ(&v!D{|hIqSSCiFkM`UfRs2V@y&$72GbE- zBs#lH_N+Xa7TjeUvSj8j4`@0#5RUtBIHCC?FQyaKRZ4f0S<&YrNP5Abn>X z=713CN1LmZ{dO6cvwYoZCOa4>#P9jE4b7a3QDb=ZzMD*StBxU}t(AjDB1C7H$n_dn z$P&@7>-x#;o~8gYyt4m>Jbo-@>8rSBA5b?s61cOTx@3_tD1j9Z^ZLZkz#*u1p=*wG zP?oJ>;{0)KP}_hn1LgNdHQAt!I8#K_GB`2C<~C~aUd5w>8KlWwa#^f~(xT-#?Tj%IsExg3@Z3Wn0BqeL zC!YMc+c+bHu_-8~2D19!^ zl(k!dymrQ)z=ln})#OK6T3RwLfHn}hbp7!_k+&NnWw}-JCn|n37Hm*eodI&dvb0cE zli%!sBWUr4Bf)<>=>Tb=ovAs1whaV?Y z?r_SqCtzvKSHcs&l#t@nrjjvxX@w31+a;hrP#q+Wc@$K{3;=~oB|rg)pLcnw!CmZ@ z`>3yEVKygj(+iZKv!EhbqQ(0en=!jbW&DlFpu!4BYih#$fhc^Vd&R#{)*4&IE(*$} z4;I^p%L2+8bYo0zo&)lqnvX1H!kT`@=A}ncRnV%W$-@PUq3+Z=3M{!O==6-YfX|*kYZry+3verxe-o<`av`ZXGSK z8*PlORAbgElpCJwX+;?+OGN}iPKc;JBb}NdgY-*absF0^Z< z(GE@bnN=1NVCvwcmI^o$;ke*6?~U#+wMZSUh#*j+k%c_0kW<<@SQ}6#a10ojcJDz6 zaXy(h@3fG0WVvil}|*(&Z%X z4#wJsR$HU@tjJ)&zBam=QLY=PK*vGCfM^#k`V!@P8u3_qak=t24J0}KNAn-K!eSW} z(pAjB{fh_p6-o2OaDRqjEh}8(1rJ^5K-`m=llvah+V1ws-@5&tSKWj5*^LZ=0iW?f zvC-*vlpX?*Dd{%KPA!JNII0_2#k{m~nHdp!s8ktfBGFVFf<7rMy~qHa4pQZR`~AQ# zavVH^MMsZ5PtnQnD2=RLsneiLNOq1Ys{PpR2IK)Alg{flnHkAUwfSp70C5Z!cZ60Y z7eZSSDg>Gng(b9LVpu?Ac6PozyM5FYQq%L4?q7LAw?RbaFO#XN4f(vTx)$(|a^=K8 zE{sz5e7|h--r{M(bSa)*R5P#PZ{J{t7?FNyR_QaNBJxDq*R;6$_NUFsnjts=Phvm; zJFD0b#X zxWB)j(J0IkTG|p_uil$~%fGHpR0C50N)`ja_7DFZ1{GRWxMr}6R30d*mv46C|0GQj=6?u@B5o?Mf5=@BF$eq9VcGsB zGLNLOlaaOAnF4KfL#rT((TCJ4SwqB4h5k8br~&u0+cOPgd-aar zZJkn@AOxYR1KGld_CXMFMBW;0_Sri{N=aySRHWZ>``!z3IzGHQTWq1ee%1n1S4+$H z^X`n%T-}9oXB3bWH;xD-;?7O+dRa=x#;TU%~NtmCFto`}WEs6pCb9(fzNA$cN&bhTw>jg&>NPeDNF($1-&t=UWi3fS&o9~VOXe1n~a!_$dZq>3pv zN&WLDP3ed(u{}1r&)MuFh`$Bf`y|0>hlA7ouhhy&*)mqylc95Qy^6nj<=Zj%+^20( zAZ|yiP99OVD@G3bN?kMcM?bnCxKe1e2+&3O25zWZ=LLzD7ur;J@c?v*T$a5iJ%W+4 zF;gCIltj+pLe1Vi_nRC3BzJqEGSRfbj{n7Uk7-_^TbDD5C)NzzWLLTgt$I;pvm3(- z=0AhG31&)p4&=G~AE&F#ybgXxZ$Of&Gu>1r^MVoP8ZC_(nJiQXr6l*4IoBAtQd;!l 
zr6ZP~UOk!~`aYd0F9oZaNNqLkc&E22!?Ilp-%Mc-Z`;c{8=-@UAn}&3N&KzS{&E$> zD6(d^+`h@2AG~dcck+ieRpkW-1dVj)>H^RAHQ|&4T3GG0?_hn`%XY|dkz(mTh%a#v z7m=^|3kyIYaX%v~a*|k^Yjqf-pzzAHff}ZhdSuY#THX;wzih64#m@B1pT9Mw*Zr`f~S?P;6azJ@oN? z(Dc&pm;4ZPSHn9e(8%jO6qQnQkBqj&)8AI3bpP@7mC|!w)`2 z96y=j`x+>^WyH2`)ZO{tLgF~1f8FYvTiK8Rf^uLr_k1DdQ5C?y=gU(*?BNyAq4qAj zj^r;UZ3ypP@Dpxz)!Vf?vjLc|PB+PZX};1)l`{thTaR~63n9T#UCzc))sM80cm@VtD`| zlxx#lp~_eq%D-Vn{qo^@Tpp0X#|1G5-wI_>V@#EJ8M6_ zE9(_T#Klj6vgAD=K9AXY60)CoG!FqW?6J(P_A(P!a`)1XjC5R%jk^XyIDd!O9*gXb zOO!WB+mXb8-ZJNQm&Uvp^wDf$bfW9Zy-+P%E-BGA$9YtV5sf$K{W=Bw7Qm zb>Wc7(TX<7KNygF5i2$=hGILCuX9J^82Bf}Yjgd+n8ojk?tLAUy|buv%l4v#+*_)N zk>TbCByFNXi!+Zv?Qw0WwslZLAOqnolM%8DOZVaL&8C@YK>|Pyl$i!?S@O)BII*JT zrmmWco2*ESVe{C`8UrWf?~FH(l0?g0=(pJS(HBXvGuBwss*~MqAl}%y>)z@WJKw^G}TUpbbphSqG=u5+6N^yp%T_9SR8ZoYqU>qKFx$C zkSYVu+U5KC$!?Daj;(8G$*9kVtryFk61~ybv8%0wSzafON8;M0 z5qo6@iCkh$ro(hlw|@>2<|88I4w+-V5A_#iYBg_r81;squt5bNtNw$|hEVrbQ=nQoZ#My@esngIl zyePzJsYvQ(&zYWSPLI)7-ZF?$N2aTVHd{p0i9xJa^r+RLd0v1RsP7e!1SN9mZ(Yzx+BO=@jN=5114mrWk&`!@Eo!02y)u?^3u*ea?b`1eZ-oNtUREOL> zJl9ZINZpY_Vp(4MeIQM%aWiTxE6N3|2E&KmxJ{nG@m{=oSo;Z@W~2+H*O=4X&)luq z+%E^iRb^ys88D!5r^RBpo5Cb{(qklNvtblFOUPR=8eNP9s1md;+R)OFb>jDLe+=8y z$!EPTHx+Vfa@!e5IyHbMQ5ma8eHdfe5&A37eDg=PG)Ll$OED#g8Y1_0F4p=bU<615 z5`GpNaQX!BLh93-O-~iy;>-H@r=NI4Jmks01kQdCuO2QP&^Q2*2yQ8y^Z4=kX(ykF zt|qGcY^PUuQzU_PI#zxDTIyOTk&+wMMpc4+QQ?@aejv#7nHc3%KOk!7cd6{VmA{~B z>vB*ANHw$z>j-`9q6{AaHLB;uhpC&#j(jjXb;Qi&pxT(H{l0xXX^Wn+0u_YFF4q`K zM5(k*k48@&;Fq4NX0Z2PRFNi% zfDY}+=s5epS#hQ~bS#6Zw!GGzCX!CCF`6svs>$IqZ-0ffxehd93OO&KbcEJEo%MyZ z^%rV3rKt)U0wBzLw+CkWppIEkZ=Bh-HdyULnFD>yKLMbXUOn3a;@?E*zCFG!^Z~`2 z<;$Hu`@T0Gstpf3+@n|+lxX)7bzAKK&Ndrqr}(}6KjODBh~MK=B=`(xLH^pk2mPae zdT32cZRCX;+2c78o^*MK&3j0sDa5pADaWXpiUqYbCD3s1ayv6jU*c&~@Um&WU4*gf zx4@`;KV}#Eom)}KUV{rq z%R{6&b5O0YDh7_8xni^l70PE!2n`W+bxpO5e`G^>o}n9so8^u^(|V6f-5s&WGFTAU z+YabT(F2}?Rh_fBv>Q}3Aw|_$WEhoY2J+{;s-yQW`gFKnbBwR=NK$aV5@ox5T%0#u z(2JDyLu-&^7h@I0OYC+;LYD`5#AYrYib5X13K|c z1%%h5o%d1iXar7ZG~yuSs>tiD*=J?|w|MQq*ROiVIc8PaDp88tTsn>y7<>lwC+^Jg z&JY8@ebcgVHV$;4JV8(}JnWS8#yV&53O}AnsJ|c>fMVYI^oEC8GZk5s8CZ z^hR=qdM`0AZTQTr(9gTyq+&p%&0WIZRpppnEJAuNGA%NZ)|;n!M8d;Q3w7S(dMg?y zS;|Je1+z;xC)GOqvoosV0x-XPhSUx{*s9th*PeuClS$>43N;{?TM$$XH7Us?o`$yB z%36~E8R<~=>)fZkQJ41Ezr7v_H0vWYrrea@hsa{O1BeWJf}&l{D36S5!(>@7SAfcn za#lfvH~sGXHt!Czq9aymD%-2MZU$F?lav?*-!}&NHb&%nB<&Gb1^pl3BE>;@|)jvd5Uj znQNGBP2+yo)R_BN{)l=Ls2^XQH$|%onL!LhCSCpydv6}iX4}4vb{@2J#8Zk6k5(x< z7+PanMN83|2|`tsh}0BCNn1rr(Nv#Rsl9yFLNE0x5DKQlQ2Hu(Ed-_cu1{73$?oomz z)8`!42D-aaPFle?5;q|tlf?$3$4mQk{wTUIJn8`OsU??BjU=mfY&@(R=kGC9*B$vx z$d=qO)4FM^{`svyrBTvlCaiw}8F3+G#x_(wa{0(J+maHm%t`x=rHF<{o80tefxhYI zG(?CCum$D#vwcG1PP|_tt37~WiLpt8-ihKinh3N`N;FyoG~y~gtTArB$}~NsytPe6 z>|)QJW56pIZ>-FH3*_K)<{o8ZE8g2aJhFe0L{c^4c)2w!d=PpV2$ zv9C7WV>q%+;o--~w6ABn8UZba5TWvxoO53ByRIBO0(9Jd+KPCNmo#1|*+zYgTql6V zl=E1}qgI&dR}RLU5<8B~nXt^c5b&O&KD<|bc#iuk#R}fs@HGv{RI-Rre7rd@{2u{D zA@qUYxe@FDIg*%<j1EkdP9--Z@QLz{{!Ha1ywkeZ{b_4gg-0>4K{9(L zd#jMv+ZTXhPu`igR)F98SmqzNwMa%ncWTOEN4dAH;f|YJ;>2Nlhp5Y!{ebyTl_i*p z;Jlut5Dra$wPD(Antn>$LBX$pbGf-aSysC<1kTKG#0Qt|YU*Q|{9WpMcj45)80X_Y zg`v@WPF>g|1(@imaDqg{EdU{iPP!^2Z=D&L07wt)z1sZKx6>@OX=g~ZZ>~d()J*RS zQ)-3i<@a?!dE}*L8W0Vgwr(8(2t6)JuTxXK0Hv{jd?!FVfwf+C8gs4O_~B1kk+Q)0 zUh!UKr4T(DoC3{<_7wuTSZBo;rqgCQ`E5WGsLi#NaGqCXjZ2$Uep34aM{a7@6!5B=F2D zditJGN031y>uHt#jz;g)iEuk5Igb2{&_)_ICn~hT=ZV#8`UTBS+FdGBxVXhgO~plC z)Avo*E=NusJ6Wl^VY(=5gs<>pU-`{dAT5BeNv)9N59FK2XC5=t`ou9EO17i0?*VAMpDftNGD7)j&oaj0ol&5dQJED_?g- zK!^~vUHp)*9Kc?vq2Ho+UMN5cj|B?M_N5;VdZ7k3t|xnz9z>m0!A6 
zc;;7vdgJ}aok3mt={Es>4|T4hqHxYQKM|4SxIS}X8mCcL@*izF5YlpLKiwPr4MW0`e50O1$K zFGvE3o0ng*Uz6Cfu$h^WamM3G{nx8>1dt;%CcKli5842cfF;W8)oWkHL_K;xUIMb; z7}CH$(Q*`s7~kDDtc4TOXBH)B>cX2v0IjR%XrAU5{Dejq%<{qC18(--`jBXuW%!9K zC_u@6T`I5Y!~tXlO{@C>Ks6a29wP=1;LXGNw@jVZqW9oV`M@_89{{r0I>CH9wYyC$ z+Ez?(@R?Hix8y6AW3(y?N3AYji}J`)5H&9Lnxm^obTv636%~7Qux;B|-+Eyy)xqsM z_x!=wE8T#Wmxj(&Osu25sn>gVm3<*9kdnJ0UfT*s)p`W^z^1v8dUESCvC&|wgIgMb zuT#6XP_qQ!3BFNN?)N-WCsHl6gE8yu)DekIaQ$Muwg?{c3b5#^+QBQzZ+18RIP3X! zy(`d&vdzeD`@-@}!~p*RkV_Wd$ms(BX1A%sVlYhy+!f(y-E(oLrMjyKiP+nNd79PO ziH}Rozt$nzjaxs7nJlrHhk>4l=Mvjdy&+SFX8MYg`P_awINoTbjC;-S!j4>Vs;*Er z6o8}amjF6nNRZL?$y+D!xqVH7slTJXM?cGfJvMWk3k2394TUo^ZR&b@YCnI8NzP6b zB;(xT4&UzU-SMaoK%?}^pInOZs_?k&VSYLPiA2n8u~Wi@^(F?%C`cp4eadO;_2#%e zp+}vg=pM*}O%ifC8Z!Q;W^L}LDm}Hh8kCQNkknR(g9)n*&vkYJS$=f>M`=LVOg|3q zH~^i6G0g!SVXi5&N=qr`r&z|)jaAACBf0TVT39jju3brKZ#P-2O$gm}Jax|DbM8d( zVC0@BQvj?|?@dJ%E$UD4Na3lpogpzQ2Rqe+Ez;11Koc%H!Jg}AIMm(!1%P;^k+I8z z5h|;l8Uj4d&YKt;P0}NX&U#8}2@(svExVpKIfi`qrTu(&a8w(pHM2tw#?3kOPoqY4 zN&x7n{-Y~BpPip8E9AeAC^!S^LazOdU~RbBn~I;~#z=#!A!veD@W@q-gNwg`jNQoe zum?xs;r;atQ3vpIM?*}u72dxxhQf!YoV`DKN%q{GXS)PX<}T>!Qbfx&M8(~|<3THB zXY3ZlAkRkM@f~u~tqgb{b`_qJ?7jCE)Da}RdkuNSazRE#&s^h!zn=EU{b+G{y$h?u zix)$L&(Y@rxukxd(wE9gF;7-$8pUWLwA{9d4!9oD))y6kwpM8Q6_0hm(P?mw@%i_` zyEBIdlFthMHKvO$Sb!I(v-U9;^ZdX`Jm)x$d4B5Az)dFpZYsVB;DMxZLf@Pw#AR;3 zICW)^kK!RTIcQPY(LBlUHV};9wH}dD16t+$cd+i{w%>S4tT;G@TyX2MnlqS@(bS7< zfJ#bj2L@prAkG}CcGb(+jfmc>Q|m@ZQFX4qPbK2>K3`m9PtzpZVm{U{wj;Runy*SV!F_* zXf%YHyxy)0;AXkDb4qv^u$d$yb)2Gm599F;-{p&s0Bd&jp;+#~Wx3#EkRh**vnY?Z zo;3r>yPxpjb~9oQTx{;Hwc1jh$?Tx{;P_-fv1o>b;h@zsywOK8BoM}9RgLYV!!}A~ zq5X8rF{DViqX|rfY-IyU^__Eq0l_Z-R0}Yi+CgKjlSlx3in>SU$*AjbF{NNG_n$CA zL+M|rQn?#}@;9n<-1P81plD?Wt{S(7{XFDaMn}=b?G=XBU z0G)w-coh?i9LUCe{~;`!bEFnjq=G*w1NgSF+*dsAovhvGLiQ`7D=<^Rw`yrb#M-?L z#ciFP@X1@x*f53UMp^MCU7$9$=FRbcSBB@Dr}tUjm11d}hl5b|t`4Uf2MEtwe=G%7 zr|F(m=|stBV|9X!lY&lG_SHVs-5E19=s5Hm>bBeD$0i)Nhd(fbtm74u)(4iJZ$1R+lykBRoN8R{<1cM9xmurL$4P-56^)fRo!Jn-5`3 z+}!2N-yfT3u!OCyu08IFTkpA9a`**;z114_fvz&@lgKocsg;J@mN7L z7Z=d3Xm+k0oDdnyifR;1wlwbbESN-f*Yr#+2lYe^DrPoj?!B_(=+nP{J->3yPNWva z4x=t4!oHw_j_?kg)LIIv?U%cU#tN{nf`L-PROSg1+rn3o?B2dpMRS*!7@i&G6O)*F zRx7Z_DIid{^PB@!8JN^Et*v;(9+2$D5i%K&U)b%RQoFm~QR_qN z^T72a3^Y15O>3#r;-o)#)5ra~4F2gQ5Mkw_q}w>bOf@AgUw}QP(YB^WgAOBt26({iS$2cWiy6HU*MoJHjH4UK?E zLH*V=L)EHOvR37A+o!qtbz{MQpq>JOt065WAJqV^g7AqcZ0qc$HgJ8~to#^Ue^^>MO&!d(G=;W1e~0yX zP^6XoGDa7v%Uk0(4<~g4!0~g*8L70-rOnWKmQHmqHHidt9goqxNjSsWq;+F%Ek~hW zYl5gEP#V>dAAOKQYdVc$cDnxL{rBoScjE-up=AmI3cXsLn2lSQP}t+y`bA8q-d^Z= zTM;%_3pxa8rN;k6tQI2zlg;0=WX;fpz%82YEm&&w5M4pm2T}lTrmVE6NA(B~JB|5p zJX$~JnNS-KyL;8-M93CG+8r|>HfH?RrrfGc=NiP98M{>yp>lA-FrM{*{OnHsHdew` zoLUy1npB`anOubB)&%*013$jk)LV^w2Q21Zd-Q5dNsbNw_0!1dFKzC+CzeHU|3qj3 zl|pU<$=b=f<+UW`CGM{2X%tCo%{mVKu2h#8(m`Frt&yd7KcVE#&Xed;6lsLQiyR|9 z&v$F^`;5K+<|>aqeOdB!Ui&4}Nwiv+n`p9VTI>{XoAEL=e2lSOM%Kx@1!x}{MnL}r zKag>jUAm{3n|ciQ13vvi*Qa?#i;40tBrz))b+it5&miJ==Ylzl^VyY_lQEw|r&OW}JF)P08!#yhW^1Zm% zY(OKS*HAQWxO{OS=nNcXxA`cgZ)mZl$6kF*pz2Og6 z3#<3{)97N5HF+GaYuQO$`Yqtjs&rMWz%yj;&nAj-+;)XXKbu@ifF45B5CP>+4lH-z zI+7A2H-2gZQRG!@`8vg-5YGY*D!YcY+f4=jVm8R<_+WG^1r3eJF?6QmTG^_H-PYW1 z8!~4dR}3QrdC~BwS;GFvyZ`6rc0T39EujKDJ!_r2n%3h`OnCaCPFUm$~TkZSd#V|JwQ~L{q0)B9u^Hskx1GoCGHW9E&adurJX-Q4 zhY708r-m=q> zg6mKUsvFmYBKnjzS5)?z_<+v(7}umevY#XKL1U2CFY%Tti?eTmc<{Tb$M#A?qvxp~ z?6WCnKhF2IfmAdAL&sZOhBA<@=e4|QTvKp}F?ExbimsB#-105B$m;1~A+sszF+nud zQ^3{)sy?EARWCqGiT2dSbsd#tD&0 z-_G;mk5D3?LNh1kR!=YI!of_SZhH?1N!i#tn5xLi$f`;TI;1{nR{pav^6{~^>yeay z02Dm*aokh!qbp!wKm2`_f#^I7N*X4T1z`tNb8E(L8)BN6ayMpo9fCKo9&RP9_6LU{ey&>Noqm$i0&dM38IPL7Ug@GWw8 
zhtBkyF7*=2lT_f_Ul-r)8U!;YTTwypPOZ**W5?)%tlHb5M@soWyceSpYn<`H>pPBW zi5WcD|g$^825H2@h-f)YlMW{;LTl0!2x^_?H?RFgt-T(aB zQs8U3G+304q4S)dO9cfSRA2COu<5yF}6DHtFzo=B9JbC{L*vrAZyD8>j|*Z!!n zyt3nzni{R#262NcgqsGI+XL%FKz;~)}&YiLLXy_J{t;;*W zuh@KG7+9eQ2l1U^G}`LD9^+O&cYc5?2KIl^2~d!qJNRKngXr-9%bfdYDPXKDZ?UWZqdo}T={ zL3=2@(RDq-HZ3DR8wF4q2ta|?i4xT6dR0u%&IYd`EG!Zi7Jie9amnGcw00mKtOeq; zDS2((y8xFgh|@Ac7pCs~cO$=4`ct(XEStZ?P)9gb<>eNnBk)@2xg}2<{j~?Tup&xL zW8uh%i|~7%$Y%t)RbjFFSZ!>7qDb^cD}XBOjGX z82#yt<~ycOB1Dr2b`m)fP`nzs)d-x>(f{hW>BzYulM?6ekcyw&#pyns`=gI^ zH`P{FTJO=Dd*n|P!b6?M=fa<-8}h3R`$8KR9Q{x;bb71(CeSS>YwP!bCY%q@tIgJbG})DhnV%2rczf2!aOLx! zMPMBJ2;1)~BRY*00VeZmn?vJId2^K?-d(vUK=8{H@mm_Pujcf>Pl~ZAjzwOUW|NEi;?~myr z#lPV_K<)Hv!G8(hZg{lg-T!i*Y9$!3EMJ9hcJ}sZfeMaggL8eG`?|MJFIRr~y@>u!(tZoDui9y+f*wwX} zwQJPX-q>S!3tcx*R$U=_s}>B%Ro?5LH#OI&^Ns6#nk)0{JVnZP-pI?j5_4S8-`!Ae z*{7h~v$)jS)b-)(QWUXzFn`Z7%#IagmnnK0Bs`ctHP-8tN$9SNm9d<$e(?MA%!RT@ zUD>6}Q)A|>@G?=(r;tUOR0o_EF@r2(xBg?nV}C9&IDB(EjT7b4|uw89Pv+cWBD$#*500-K*J_m+&-C%fmpW;-w zioMup`P?-*(a!#9J3WbuL-v}D{B&~uSa;nU{Z#4#;wmUNi;)_tB3o|?vlfM;x%|S? zq@jSff<y?8OA?q#h1Pm;snseqPv|!(Fts+D@fr`S^2D9D*Ks2I;FDqQ_e4j7wpE>H zpyo2#S~!>2z~36GcoJ-<1%V{{13^C!~ExP<5SJ7byYW`SFNKbCkU^7~1`V647Wi$coCRqMK!vHA=g~ z^3GmtJ9OSYEL6I*9|0|TW|6F0_prcn@w^--BQ_B!|QlLtsXc zWfo>@S?JVM%csE2s-B)+znm#B*p6Il`1$NL zPlYdS^sTxPrEpyk7Dka^jYv0gIa}qZyh^DbIgIS+p{2?@IWzQKF6tZ+TpqzCB$}1}a-#A>jURb4UdzRp6%yTQvj^J2t zg&B?DB8ui)@0^0%$Q4t_ERA<6%p6tADhTL|z+PlR9II>#W9s~0>3ifZG(u$VjC+EZ ze!dXz9NvKj?kP?bG%C!ft@ypA6y}j?&#d^#W~(LjTRcjDH6=G^0v+TVmI#~~)Hfq7 z<%>_f-NJAd*@vE;B69mrW;4tu%K%h#0EO7p&ACf(3FG|zbG~bh+RyJQJ)1eB^k4=yX1ir{GLD`dktIjWHW)Rc&8adQI z+{GH)?piqNCFf$tPAlq@M)%H$!*|=?V;``;+u1uWJfx9{^(x>RKQWzz@F0mg0Y_bm zPXt~TS057TPIeRAw0xJ4Lz*|wfy?%^EU(R9*A(~`NDT#?axEFKKc<~CkWoY>&p%1+ z?EDSJ*-_w)r~}KdX>D{<@6y?wla3CUzBE@@i3poc@MO7UmC?ohU`&4wA->qhzoXrm zFbHU1ss+*NUHYJuO(aT8tYoIlkO;q|UWOl0DP(2-@$J?>F8+`5e!v;ROZ!c}(av4Z z67c8Ao?)Gyb;d~H$JSmqURG*hwjjfI2d&kCaDzWmAV|Ef!CJ`8knwJ}aA8VSyQ1ao zmsSqLX6_P&1ae8dE^0;fW=qy%F8=3f(3C1jGHc>u6nWBKl|XWu7`zP?_g_0Hp6SDE zXbICZVrz!qFH9{aSPf8$w*atU(H_wB(iP9-Tgh@9R`({A4l{?6ql43{49>6K7BU>k z7wTHJq~+kuwhPDqP;o3)3oDp>^En4aw)RnUO~qxi>t-*tgR|Pg%}0eDFWcyyDP%`~ zCd!&x|9O77QUa65IjkTnC!3|qpQj*kJ?H`b;w$V6jKtu`+SSBvrQzVg`~yVfEho|r z>NyKlaEW4B&ArJ+&)?kFX_Cd7{sGSR3$dMBOuy$w84dJhhv;D~Z5s32Gb zd#$AlMRI(H(jRD5Zzm2)tweND1?Ii9Qu##384JzldPTD#qH>(w@X|*!{*z*I-U&i~ zm_WwB`5$WtuRHN5nr+;Cl)g!W){U+p(D)V{lJUIOWE}(i zph>h21xtr?>U7HAYDUYEy=_)w54)akm#cFMJ-vLX_5~{uEHaQq>>(dbC34Dx+P_qy z&z&8er#RnkEk6hUp++1_Bc`UDz0X1BV=gD$T!FHk?^FbO{8g#8!NOxx`U zb*JVEAMK(EixM_-UrAi}9aQf&d-#jQ73(&-le5sDdmFv09{UL$JwtUn^^nHH=A>*P z8Iv@oKhKjIAKtVsOw^B$nw0Cx8;IEwqy6`&UOM-oL;xifeTO6H70NUz{F zox`)Bdskd*;d0JA3d$A5(YQw{gV@#AUMWmTc!Mxc1Bp5x_JQ8BT8?od-Dc04 zT*zOUckqq$c>;O>s?%RaACpX1A1KmS^=<#a7b1=LBN0U>c^~lFuHGtA<)}v`U0uEDs9Xw~d&4N1;m$d6-$VZR3nKV_~3|CorwtcG!4X#x>s4r#b z+<5e1Hm(HyHA@1|l(}ZY%PP5tV`{A^KH6M1((|qFfJF=&7UW*d&j^PtR_gYKv2Uw_ ziALg_>;XkK_n*Tytd%qwFp9Iq>XX!Wk$>LA%U}c@10Njf6&A!%0;t1yLW^>(W-*@- z6sYMh*dj^w;<8w@q(-r+k5?HOfjg;MRB^7<~uy*DJ*DyuzpFK4m zTKZ^RV;GEqF?HEKpJ{@2Xg|KWoG8sYOKu&PpHnSEV3vRp&0g-h=5a%uyVuEH6% zsYFx=?cT_U;5rgXDU1zut~dF~4j;){4!d+5fx(sOpLwDkz>}T3o*)B8*$sFax5*w* zspmeGHnS1+w_3sad!n3?QSEe1B8;oI{&4tw%;lOQgYMrvng!uJD6$Iw+U{Q=@W?lJF zHWt+`MrElx0Z5J9Wkmvi>>!lGO<_|>(#6S^$}BuZuc#Do_NndDqLC4=`4gB0XGe7Wre(`vNE5Q zCj$EJXo524B^9pIP#$!5kKe$IIA~zb%`cKvCeSAurG5r)&k>_8aNDkHHsEiSAtcv^ z>-J`eC~KSkrg3e!bWV{dfR--DQ+4km^GOj@TzH|M`x|H;gA;q*Kg@Y0Nh%2Fm)R>dM{*l0D@+4`Q0fz{pt}B#WwF8i`Yv^LC!cf-8+#MZXM5$jNy{ca zbXu|(IqwTE@os^8thLj=F-8f|DK|x{1=+=e4RS-t?K%yO%38;2xbqHjrq$%4ZE6`D 
zKiHfjeCkUX?RUoNm4Z+@We}AK${zhA80TwV=wSbqTA*c-Gh8>Mt94!qGAOgptRktAq;qFXO{nF+8YhA7A$3K;!7bPRg7d~h?e-?aEmfL-rNmsz1 zd+;vSTBNfETFZ8@R;Me}#_M(=qpth;`mbej^d@qcKm;;8t`{;j6<+WC8=JV{Z{E#@ zDXZRgdh@RGFCEsRhT43be)T|@jnU)_Oj<}`WGYeXSf~pw@t}Nh6HK~n(1apUB1m;6 zJ?Jy=raHg-88(1)bIk93&XdVXDX6IV6<#=7Mm2^}*kX)H=Z$)n#>qcyjl)V$3=tmZ zFvKQ8l4qb!Q!H<`+*ajGN0t;-bg<8<<`<-CToUTzzB`V44~;H11i!G zysCqqxw#Bp_8cOZ)tsDG>Nlpt8Xw6FagJbY}nt|ZjOEKTVhv3*w@()gnbQ-AOuUUfoex-gn zH1=c}mC0chL=b-TF4oy)>ejf}2qX9g6^<0atA?+twov7%Pa=%lzWSP=j0ut6h4;%( zX37YVYCq9!)uFwzPI9`U6!UgFCxV$S>a@Hyk9m)Z+KLmeSf~j{1zClyYN>2~DS6 zYuQGGv;!4OE30dpI(g39`gyT70DyNGWyQKQe!f-t#AIr~P!fp!4+FMMUnpy?*Yf2x@suN)ObpWg!hhWC3dj!h!RHBMj010o z>~ZgqxdA)m!|+UbT!$2%~r9TGk&RJBIk$k+(A5{PH z$^p_=sV{>*?G}09f&ty+=_Xy~NW@MnaL@^rSWr=MM1e~Y$@2=gbChyodB+FY9JA1;H#lR{AS2c(M)wTXDcCeN)zu~U@01N`-G zTW+}idSqUD!?Ec~wd}@sNu$*Rja4+^qvFPwqfwpzTtwOj04a;b3(tI^ZIkE-OTJEU zaPz}Zof+S?uLJ*F)%+BA(&|8d0TcC$)wt*PS8-;wz=>nfXj6s@RQLy^Iy0+;*yw&q zZEC7B-A*5oRz!|`=E?o-Dq1j!ysy}B`tRp`S~BJRvw?2duTCHN=r@x;4A>sex z;)nP1kE8vL|Cw{!=F(r0;!CwAMF@V|rNCRFV}%#GE-k+J(6w}pOuDg8jk4o<&0kOZ zzqy9||GR5GtOiB?{Iatn&k(%FVC7w9W#uBkdAkK(TP0D!fI5k8!vnLzaPZj&WA77R z2Adz8<%L9No%imF&HL*};yW+@n`KG`E^F;XK>}!HSxV&z>W@Y@>pD#7e1XDRRaR(H zgKQKsL0|hNqxV+rz2kpUUw<#3%@N$Zo_uq|y7?|mh^3cL%WU|bx{m>CZ`#DT<8`X$ zqjMZ4HZQTR^7?_VcTRaf#5y;1c?|8I(A)fNcj>ANWr}uxm;x%_(kO3@#gyJQMuUeI z?Pi>(nd;tahAB@IaQWVt6Y?K5LrW%FBM#_sycXlNu)5|sCOj3zXPC)iHXpS}K zWumBnUgS6sVC);661>k3S%M)$n7dIR^MY z;^wuzGWtJUY|Ylz!nVCU^jcVl3)FU;asHn5452&2mQfzZIzSY46Bp|d#UcE4Bj z7!!9+O{N`FWpNY|Y^vpI;Xu={ou_AkGYHgdRaf{?KXj@)zgj87Pu4F28nGYBxxXvD zJi4t18jhR2t{QRvbI|q9n>o%7?*&<~{;94PgNfkG0s^7q2&K?U#7{x9?qilM9DKD^&)N5MSmvjkt88CU_ml>%=90clh1pJkVHe zry7}h=@do6MTNLxVS9#W{8j-3zrbt35cTc53$yNj*Xho_hzFV9O$7?3hi;18P!8AG zN{Il9*5PDbkeO(~CXOw#8)j7IQ-y~nHGSMf+c4ZzY5Dj&90+pBL3Bc9fJB;~R&uEp*9V|3TEp1u zg>RCo;Ud?LFy?DZ*O_4Zh0@IA98&h=o@FIz0HdzT^SRaW7|y}wyq<~bDW}Ab z*L+TMtzLVbNzobbS40y?&$U#Ydk%&oMr4(f$^wZs7Ayyy{%WEfSARitq>UblhZ>j7 zBz?hk^hAAkvg$Lb`hjVrA{Uoj5T64TOR+5`DF9Lp}Xn_j7)a$fcKVlA-r zwp*Rl0iDB9$T(ZrcC>M=qgYz~*iTX~y}!HkpjwnFKoZ)et~Cqnn8b@g)-YnFi+Yb8q~d@-@D@u;vJta3A&G<0A3({e@?bcesq z{RqJ;VwBROI4i)%r2s6SV5zkYpjN)`NiCYFn^(rh&d zA=QKpd}fnQDj{IbaQhDb=ahT`zY94LqN2va_SSi#+V&j$q%Byf&MX%+1R7K5tJf0X z4}m;C6gZ^}Ub4(%Ux6<+aP2t$$7>fOxIMudjoeVA3Na#&~UI78a4>EpH0&$8;l$oPCnsn2q4`S&8@!iA+;Tp&Bm*5 zPNW{5PjcnQDU>aTcsuDDz#ryt5tX+wCJq|R?w79;*m~1b6vBMwL58f}#iTtojy7VCN`#v=%etU*%!czBfE9S5!aZoqH z9{J(H+jjc$U1)A3aURw=2=H`6#@|wG=iC#QqYg%dcL-(leczabFifb#ib-X>3$^TG zO``pB`bpoOHFBA8Xrd*5o^*4=xbqzs9)$Q^`vzBRu%Q))gs1{U4gK>}dhK}8^Tu$; zf$^XQ!-kbzt!?!3kt?cDiPQX|nxyjChaC^kpV4A8J4+7psrA(mDl{ZTW6(*9!-qsz zp6sx|0h34?lfY1_#XOb*V#6@zc7^u zCxTTTMtAXS8puHzN7g-+kt%|b6t4Q<&yNp(?;ChHr5E+Pft{c)?;tWDm+PpU#kg2A z&OhE1*t?M`&2PRES{DZFn|^oh*GmaYyqz)#KN~<(xe@JoUx%EQUs9CH{EgjhA-|7y zT$GZHr+|6CBC=BDT&&dl47f_bcH~wR z**Go@|DuZxwXx}9<^0Z_fu8TowvX~7Xrn|!`i7Gd)r)`Z%O=b=M2vK?^x26?cgd3m zPQ$Xfl8-h4-Q(`R?qRY%1jMu;X;kF0Tp^nnRPsIFDQI>4a^ZMVnc%JM5O z)!E-Lv_@Iz!hJ}wzLNCRvU2rLbpWX$!eJnsPpar7O>gw>!(iU_pM{Ds2wgh-Adv zciV4EBPLnFOBW?3wSUb9tmDR~FY_q0-gj7awSmDch)X_ib0-@cZ-ZwaJMyv?P~=?W z-zOmix?rKK|Butb7_EB~tj_UJ`!Djakm7^i+fMcOZqb%AaLM3UI=D8V(hcwb`AYKW z=W~TMAHwmr=l_{5f~NS6Yu$Ns-{`hxK&;@5V89Ok6}3(=I5ctU@rS~ZhVQbH4Ph}3 z_hZ}Xvi9N3T~?b}6FuYP*)&X)4;~zP8q$+emONE-J>N^nUy=9y9q+)$CT=rWZuFN> z)eV1DBQ8uQTHw7yxYwxi%x7}jRqha41@kpV*aV|i*fR#l!cYhOdP%iyj}((VZ-LaC zK8=`eUZ6I z!gIV9YJm>TIksP zcDW98H`uDsA2xWL{^2dz$Y)q@Y3QaM%zUU480j6*&S+I^0|#(R&N_#w!j?OB-vGFZRgcx z`;cy;GFjs8As8&Yg4jB@q!iWhPedZw7|k2j7-Ga78|G*Fz9EucUrr+~nOm4I3%V9|Dv@m{|%*mDcl`St#j#;JPp&M=>pd1+0^&rGfzZ#4{RxGyW7 
z_btM|?qc|FCvK!Afkw!24f|EA1v)zd=221!c#p3~B?Z4~i^@Yq*8SZ@1h^I0v`-~} zzGmU|1fU@u^^5^?kk@-hiCRRnK})$4e!^=Zab$-7*VF}kS6W`+Rn6sFP~EbKChkUw zbu)_k=iU$j8~$O-izvDU(uVaGU4m=wao_Och#6^FJmpPFPI2#+1F@`dzw*k!lRhcB z&wJ<6iA6P=-u~2ftL+v`+FQN0Y^Nq3cb}KVj z-zg?Oi&Raw0NpEXVE+5dbL3r7w{BnfQzlj6jZa=`$@igoQq*y}#>Fn7#YKc$0ok)J zXi<WK85&~zVixKb3shGvHn#`tUiR#$?R&q5qcKUn{- zm{Ut(spvzJI$}Iw=4d*x#Uj%haTX39f2$(rtFynJ19RX}%5^B^d+LiHvo+rkBgKI9 zsJZ&D{kPw}4U8B@6EVQ7{JjB5y8H%>QkOb)QWtDj@i^*o;=+6m>$EjWC_HY_20ozn z==?N?MgqOx_ay2&g+A59RK0aHr?c>HQ=#~AJu;ei<8j-)?}W6(Q`9?N@sE3Kc|#og zsV!$B+*Z@N|8S~;T2x$piTv|(wStg`we~4~)vULh>{!2xyd^|mY0lc8h0&(|qCf2h zSk}$DDBlF(yP>68ZB~#APgf|@PA>!5`4v2`FM4dbXKd3-xD@g3naM+!?p|w|)VYvJ zx@~iLJ7*1I*xuCclAPyVIz+Oa!3bDPno)-J+`m%y`EjpS_*6Wd7`muAPf|BR-NZ()4IVfq%*r>d+D0(xq%X#C!62XV3pu|Vfp>ZE3+Dbb z5Lg}Pj%6IMB}*1plhW#Y(Rhp2kgoF<`00HxUEws+2FRvKk>YO5h?#Dm)YH1}b?>rB z_p;i^VMg2syYK(vO#LKK`Q|)|I)B*X_#@TlQAxo1aesQPYQ83N;ZMfh?r-2c@CF;f za`L7NnoWw#cjtaEV0p(8YYX2F@yvfGKwb?qFa=ABn9+{9vP406M2X|`~Tki2zQWY+W#fWVrx zoud&yKhYLA&Rt9(XxXi!2ktGXRE^L45Hk1-3*f^**60e;y28?Vv={2v_k&~aRS$i4 zV@PIBE-P@^yJR~*L}v#5pb;jgM$r)>ICX8X%MFMeS{TJh{&jV(AThO>KwBV*dXCv= zu416Dcq&QzC7QqT6=*!23xpx=_eL|z>5zet@(C9?GqP+2v_hDpxhdc%*B&A8KlF*MJdi9UtE3;?kt7Xq+u zKkFkXI6PO-Q5Wh%9$C;nZPVzNBcb#tMg5=sB9vGeL73zyQMGhexq`x#7Hr$fJSof` zOYdYUZwAPx0hx96lT2r@YDmIu6-0fbs@5-{1$^olLx*w(ejDZ7;IvO)!Qc8fi>8o< zv|frUjQUV{c)I;xt{&MH^1g@ge81XL8>bD$zTlLOwvoAOw8nT>enK%=beCL{+-wZ8 zzU@^;F^xj5IwVaMjZXFajK;(Gid+#vXW^CDroR1r1|}%~-p?28R@hqw*^7`u@ABH+ z>BItgpXJ87aK`Hiy#p$jRZ}FU{*U&)JgBL={n!3%<$decYLy+@x?lxFMb?neRs{^|$_LnV_ZLUp)b08{0cZE8kmihVB7f3kiQ6EMa9;mq(q#+)hN z)=}shXH@=C2BYSFbs^%# zR}qiBzSex!upt5;fmkwxB%vKHvYBv`L=u! z-wMUc+Kc^ePcUuJPgNBV=^sz=Bh0yloCHu-Rqqqdjmtzn(ZLoV042TvZd`nX1)Ziz zc9;%FdJx=@Uxg{Idu+4Yf+Cj(lR!ZBzvE0}lv56wG{~~#qCt?QD#67N$uYLJ>F~UN zU6nGSzX1cV0TBZzN>e>H+2(#L(cLb?E#1* z<%kn*`3(;iMhIgsVt)oonF?ht;_Zn2CHF2f*Wm7D+y#2EJ9qw-2D`6)VK;v$AH*yg z)-OI&A<3$yY1$9m9HM5|J56=F^(Ts<{be@{$LwBTd~(FS$=nH`=+6Disa`nvv_w#t zcRRw97!@6LGuoN>_3rM|VbHggH)$u|0K5QuKW$Cbm#M|$TTFiMJCAzt&KZtQk%Qr< z#?lnAj*IC!VC2(qy=cPM2EO;@oBylhNR4vIQB`zU5F)QD~4cn4Fygp+~ zm2l8-9RCv_`-9nF+R+@8*~5|r_evialgU&5Q13S)X&|G5n%RUwV=SmLq3e9kJSFO> z`zk&3TAb-b@PurNZA$xnZ8G zSp$3I;q`4U{?CUaPwwjbqk~pz<1yqf{(541hhcELlJl^_Ws$6mjq1DS{t&bsm}9iQ zQ+vM)boQqGj;f5!XJb=*EG1`DjfJ|zgaCql+O}c3Yb{yZOE!1E=C`c@YA=_n&C5to zm1FniZjlDG&kg&q%lN!J*D_WD4D%AX`C1Kq?qcO;!#~L3VVSZxp{z&5=Li~N~!TcPm z3V&=5X7!G~uej^L_*%I&2U2w|lFXw&ruO=syh zr%EV+=~@M=%-0`lu2}H|O&IL>*Tp!8UItvDoh*ia^*DV^(ASq%3yri8Hbk@W^gCVG z@)*yu@LT(RZStdkE6}gmx%txw$(l7s)_nf-PN1Jx;{7rJ@BgPd@l4HNtpG*AA%se; z+^T%DIoi*6(6!3)Rb}8Q-9pMu%&*4Op`a%_V{f>`9SVLHcXhY!1Kz3I9wk2R1N~Dw z!t^4$uE}be=w|v`vYuP~fA?j*|Uc8VcP2FFSI^BM&|2h(hog6C)WJ8J>~C+Fr~hp z-iO88b6*J={+d55HGa3w?^gS$DGgWkhrsjGg5UPV7>0etA-@;K>bphl>i1d$K7H$Q zlnniJ+rNwH^`ua7$yf`!>+gd~+NLBu9{W{Jf1mnPb_c)U(K%(#->1I68_#z1dH(Mg z0~2&L(vk>Po=Qu{aj_jCPn*vLU@Kx(wTcK3fg^0}t}b+)=USNh5fyBEVxMYPZwBllsqe!9ln?(g+8r`H*R zMRpZQc>&TRWi{gx&LmA4;@fbG_DKhKcKLs}f_yg>U)r@eGO;$*mNOhc{b^Goe1CFs zBL9;u+_wt&aIjG$#i|Y>N)f;@i%>y@q4zdyEgGh$=4Q(XpDv`g?7b0q*uJHz?Zag+ z@(Tr$BtzB3#OVS1bQ)GQRWmcpIQ4_>kSN>ht!H2|GUBA0b0?OtkZssyU`rki?D$r}m5gpLv2*DJ0j1jQ;7Au@kHRa* z1vMoH^-Sf{@oCdoNXoiKKW*Zoy&))BXpdH;6Af?+W^U8!nl`^QjwAg-@zxG@e;N(` z!zb&HUNy%Tfnb&E@#D?*RLqa6H#mMeT4NKv+lX|doIpEmtFY4lo!_8%E=|GO<*GxF zrBpnfEm+?JYzaZxnSkk7rX_LswbyI7gu|3Q2^;5d)7S?dK|qsfzJ96&ZbM5}O3DRq z_9eZTp^rCuo3iR~@JqmhCST*Aj_gIRClk>#rsRr7yR>0qm$SIT0w@S0^Y4+iY>_>u9Lp8M~$Qz+#Vp01uXqdff&!v5V z%Ko&CF3lORZu&#Vo zY;)U<3UciZ>V*)sQhvueAzJGta-d?2W8^! 
zBu!{$0D5yBQ?5K@Yd!NeTeM9b8}(Niv*6v76@o9L~c`#Ld9u3*w)CmG(= zJxQs;4dtptlzUeC#h5J} z2D;bv?d!hLAZg#Wl-@z}l&>FDP8ZG#YPD9_Wf^W-y33E}viZocrbkl2Ny04qO$t-(&QxLSUW>--8Bl$~|0* zhQ0E-bSPbIvwfvEeOG|8Ky3hHY;3uDZ}T%wcBE&4AV?80Djot^t9Ik+7Xmz=T7DOTySQ$|)4_>7PN31A+^ylsgkdmUEW^uN>_x%>^A?N%hQV{DPs*PEs{t*{B+O-mc0}=Xy@Rz1wMfpJ1k- z`f}S~2UI$|gBRKOryYJ+6d-;p6N;p&W1}}^8y+F!%3=3mt?W*8xSh4mNF%|HAvXN= zGmBayX$?Ao*2zE01wjsC`y3HYwPY(R_QB4?|*c z=|`8KQA%0oH$)}8b9ilA2qyf{0KaUEVZnr&eTX%^in5Q^*ffawE&x3vh@a49G{fLj zuq}qyDz&ngP)aI(?>k_wnW%aS7|&7&azF#T~4&_eO>DtUW;MCBKHUy=C6Wb>ld* zW+o*yQYrN+E7@?F_gAcSf@uMFH|LFLhx?B4tT8A>Qh7a|60>gp>W~MOw9pjk8FhRU z=}sDz5x8#LC}i!KxD77!=N20D0=2BCX6|a6bu98nIjLktS#+59vyC<#R@<<^^mMqq zgW1G|ju>Z0N|yvf%U$yG)jd+e+2w`FTL#*xs4Q+&gHTlR1=%PeL!&i|WF(QCp5D;I zn!I(!ei05+x~RLM1#JSF7A(c6z&^T&n^kO7!{9I`e~@jw@A=n5HxR+7>Uy7WY>9&0aZ$KAG}g!n_@*xAIg zM~a4iYzAKc0rGyCthn&<-WGCi+2OQtzf_G%3A)35 z=2M<78~YEpDLiCjUFoz`1H;sU96HFtx3=pzhtKPTl^aoC$4oK$ve=5s9CJQ?CoQ#b z5RY*vau}bqx!8RS*Va~3EXSR2!apisIrbsL0$kolgML-PpwrZk%$he^sn3_ml=+n9 z&5c_~{vvwh0-2?!UOsGSw()sMDMnB|(aL6OnNgKZTKZikM~?0VwTkaIU7m(<53W63 zL1(Yu8i`t(Z%vuisHw2mN+-x2FK`g_3vXg%st^u8qO2{AWK7I^p!R<~KgRBbWDa(bJ!?M+asVhdYFU_tbsBb48p?=6x9=(m)@jykMpAzhZT8uN5A3h^d3x%I|4 z;NhK-3V}Q`bk?TxkZL(&a$Qp@MN%>wsq4h5mfsb>Q4Na-b@NLj+StB|^a_n1P@`42 z=4WNl>4=8MA*fLGvhe{C+<)2P43WE|i4w=Z6l#C)TeI$n9 z<_i-1)7;y*&^K4*<{o%E^Cl@lR(2Yscp)1u#3@9uI{$0>xWSAmaQ_c6%3ts3T>dNO z&sjQqTZ+&PqR4CC&|fP+EOcxvkJ;ZOMP1%6(jn;);;I&z(MwIJv6Vwoy^A=eOH%FH z-GAHbJ1_?1IQ9Hz<27% zhTS~KaBg#23#UH`6D_T_@Z*^zIj1igjPgY$Y1H^v3P?NmWD^P00wltb;X+vQSu zgh>bTkj((#&>4Ud{)R&@mbg$`wOiT2>_>O>TMX|&>H-;Wjsv-&7{TtADvrtKXG7f& zhfSv~YU`qQH7?m#)2)KeMwC(yKw=#bJ35GZ*Y%nF>U1DhO;yN_T8e7t+w_YC>nVtc zLj%QZ=$+cv(F0kXum_82^4SAj;)%QSdC&14;8sT(Tgt!bI4>7e_gwRl+*v-&>YJB> z`lepOw6L48RwA516e%-VpPV0TPsqa##y81Gma%4D-wKwO8JXe~7)@OgOKkPV+{;L7e~5(u2mRdcI%xa3$mU zYwrhm9jrMUKN_J7izo_-d)zp8afKmtF+kZGAm31|?M7%{AJZkKo&Vn>Z7jrE?=}c!|ofE`XE~4Ds9qeyjyl7{7Asxs|y1}H~2b#V|zYh73=#B z0A9(Ebe*P}Zmdyl4RMCaFmIbkW%Z-cJ9ysD*~r?V-{ zdaJw?Ittb^e%J(SJ29$4PC1yT(>Q=1ihX&UW{6OGi46&9?$A8*C_pU+8&A~}F2e(r zRPGl1Z!Cqlkf8)nUd`#{CmreC$}XaXa)Ad^6=8E1E(O(uTB}?^KC%mAh4t0c%oV2C zRn?`|l}J{??sVf`SHxJ#Ud^L}O%{>z#Il!n@(I)ik&iaEf-Jf12aP-wZJhxjXk*Qj z0>=<%`S}nP5qWo+<)%*2Oh=gm36!DvYpTZeZRz_IcQD1vCsx>dEt-HOW^(go#u~n9 zO^ozbKel0NKKXcaj~Ki83BI4WteyEG9V^jVg^x95boMGbW^(;+sNf5@dOm6=1U0u% zSb-8H!3SW2tZ&&&b*QTR>k~npqZmkBP%^2?G8+7>K)2kVU0}IE_oV;l-a4mK>N2Ks5kv`_YLF++r)ZI z9#7&4dP>Z{D|UGDbRMzI-d3l7EV?MN`fS|JkVBi?$SM%F6o$#w$-B1XPOlY6M-Hn)G8&w)kOU;Qm6KOhdlJ%f6pcZW_&a;4%c+quZxa`uRGKGp;ROOo=(GuA7h5k~ zpON0e@^)RaQ#gfv=@!t@+QjKmzqnM!@W;dhkF;r?nah)yf(f!`fW`gBsxD3r!HFV- z5YLp>w(CP);M_J&nf{$?nr^Pm-Ta_%5}F#zfy->jZbZ?u4k(2pAM4@Gu|XYER~Q>x zvp3s)hw{%?>CAQ{8Z8B!fgE;R&Cc7nRyM972N{7bLGu_92R@zEUm`XqC6;e8BFV&A z3??)a+=LhqSS6@H#gOba5q2_h)Z$hH-E0><1U4NDCfXOo6-9C2vv()K(#1=oqOnY` zycM)7<1E>LRc)>+BLASc)yBQtPXv5p9+2_N_DvrJ(I+lV@%6xVzV$beLTzu;8^6xpUz*}6LYzr!HR2FEhYkfQP5{z(0My{c#n6x#1##Rs{2vlZM*4GzXKrijmu3ytJ>>V|(cy5@ljGy!LlVUEu%&sNG_Ut$K-CyM4d@SELl$GpHv)XWqBakHow zmpJ?zMhtqE15Kk%(XeuQ8jbPczyB&+hd1t(OXM-lzcY?YZ|!!^t~58+-2fR`z6qyYLQO3jnZ0hz zj@ie74@<9Vt%5o6(p|5ctlbIofw=T$u-RZT`zZj7dzpr5$Mdh(UvvwL2Pp~!H6fCS zum;zZ`Ft*CnQIiy4$+I8)hM7838ksX6lhm;PKy~joo$e@i#LN;hPDkd=<=qmk;)lIf~;c3l@yDmgl2^+0KE6G$WgJpbmZ)vonwa2s6Miy$KHR z2JGV*0Yb@kur)wLI2k~5$E~OvWSfbYJ&V|={W{?Mdi zoH*y4-Ph>&*(i=HqZ;--pX_l8%bcK)FX7E$}wN>2lVzr90~^Ew1dQBj1E-r_eAJn#^~{xKY(baBSwNL>vY{ezF~6BxQ< z0dmD#z}cY7qU`!4CbBV~O4uQ}8)IDZ>)&`L;sc(s3QN53qra^%Nt*hI8rjqpaHEBW zj8@jS`0Yvs0B=nY(}f}(40lzj&lH}Qwlc!f#@zI;t=ANmI%anGdQ|tkDL_1RH!`~L 
zX6Rx=yd22s$!aF6_#n)SR&wR7zKNB}!|eJa4-51sHvW;>J2KQ>Ra9}zb`43<7I8-< z$dj_zR4s(^s9UHiRb#?Ur7RV+*vt8p*$VY842D-cu~y&E%x|6CSj`D(3o{@A!$L#t zR1$M`s#*c9zNu`7mtCLBRl5dDslwTu!VXCB+P6!j8jHm`Nr+&xnRUQ|(7G0H;Rih5U(aV86goa+0S<90+k<;^T6ECjbOE(4R)%qT*-4{mOxzMSm9R3ki zsZhc!U*$vtP&n!j!2fMN(Xafg?`uZU)Hvl4<af!g*zM$`1Aj<9P zLDVqSicGcL6gySPU>D>srk79NisRqm`30YWPsLN+aJ3)hf#374F6G&WiVZO#vA<0v zhv1goBrWx-aF_Yijrw5}v2o>j@vE`19yI;g>uBAE=-{Va0Xn7pg>+Va%58Hki}=y% zf$KBg%x4u zsZwuTfAc<>jrHT7Y|Ag<7V|e^@C-41);N1GihE8*bLPNNWh_5zMHH+ZAYEo>3?C|c z=BPcfQw^5_C^nn&%j+YXXDBoEg*&DzO*33HSa&m{hf(L1XgVRT^P|ZDsQF#luT;K! z9aIz>)hgE3 zG0kQB`2Pe$l|9YWR9p~sbrkGK$PZi4r--Lb54=SRldfO!LIkQJ&9MhYc!zr`8qK$x z9b1HAP%*wV+p76;lR~oO2+U_1Q|_flX!KMM>VgN{*vQHMz(_L?op}6NVnZ; z_1ZQ{UEE=DVyvGDtqz@F9Uo~m5&s3iww%Dik73#or#W025X$)(o z^Q}29&Ji^t9Fj*(ZQf_2^637N$ks>afOHz5kwQ$M3e#xbCYZg2w~K=}F-Bf4iib?o zP0a>W^DoCb3Kni;48{Dl3@hYRe@{IUzbrbmLVbW{RUJ{kri7ar&OLYD@F;o(oM-*5 zV$n}Ce{i>i;QzH^xZ4ZCo~Jn*EyL4e^YsfAel<@)Rr8|`^{s zIkpYTCZOiclX~hY#<_7f0}%8Z{gkDjm%4_1k?90P=#B2uVJFySxXm5i4rluLgwSEi zE)VP>bYl*?B_H^8HV}2bfs8~x-0*TtvaxxQ#qZ3>Cyqc{D6AR({bs{t_BY^qfCQmfcm&`_`Mq5StZla# z6@kN(95kREOA9>%J=+L~4b6>UG-W9lFuH+nOG(EvTAK5$moH#LYpMXZ*Q=XkW|+FQ zuaRB#9Gae+QYH=^ALbCRv2?SbqFUjbvpQzvTlj{>*dED5o|aAYPC491XTdY>D5Ero zCPW65Egpks>oM}%L?iuMb$hv}g=DRo*W__Mh(_Trl~fRNuDjcX$Vsp5DL<$xBBgIN zGS8BPy(ygoY$+j+{|h8bk1@$759#2CizSa3`KEUkjt9h4S05cb+z&ee37^7hozQ>G zIKu8)XrmmbRoAFQhZd+0ly)l_6TRFhN357Xr03TbnUC-jdgt*Bzv|>kCXdh8heZnW z0`$fLc&S(T8;>Is^A29k``U_9McJk`=u zbuLyM*xC_u$R8613w0rT*)2pCdqO&5gF_*A@^gE?cuI}aE5)N((&>E)UnE9f_d-Cd zi7N?fduw5zy+?x5g|VsIp!6{1ComMjx!a2^k{sZ-m(RE`g39CS%<42JS+u%osm&fR zscPoormMlcylzonq`3&>5AcDoxS(AUZ$wZKQg3@_YE|zZP&9qf*&l_Bi-?^pFSX6` zT!iicN7wGi#T=5QjSl%3eC?=lVeF4tQ74*MMAgg@Sb;gV(Rz&e87JalK;XfJ#18&% za%y`U?1`v!32{nifQ#sUD+i2oK&WNE#5`u3iP=tmkWQXD}3cC>j`t}FH%3oZ_^L)yRtVk^cNo!D(A!k)s|{z=?jL>hW4E*G?(@M zuwQ}49d(0^loj}uT+i(FCZ)tljxzLldC9!ry&5IG(h-mPFC%2j6&ns8Kr8bvx6&Wl zvDe{EiI;~tuH+bpP-Ia=xR;aTc#M__+HKKC52zb}oawa(mknwW31eEmI@n^r3|syF z;erLZ*bC5Gp7&ELaqIQofB3}~CB}uzxlG#GuyRsT`^(!@QtzI6Kp$Y6F9$2bQoG53 ztKi>P{u@AH5l4)ygd#_*BAu`d&1Z7m;sQ%u1;% zpvL5J+ew9NsIQI#*An0&b!!h@GfVHvAzKm$^e}Ek5F_Rl%SQ*}ER`x&`|k}A=ta>@ zqQH(OM@%oX@MbAv2B~l_O@iSJ?l67U(2V*e0vSB7ZI<9~E3#7S=%&#`#fHVgc-9$6 zbYchR%vN0)^!&mnAkn}GHX(#s<0ZIe* zu>H$4EcGhh+C4$enby7DNSGthPooBMy8ppCWt${5&7eh^RFxky)6CcFyS{@mRWN0a zECuRhD$APr2%gt{{3QdM?+E2#Z~Z+?T{1kAehKX+CC!EvJG>UYxnkahhn$Dz|)@yK%#6ATv zed@l89+MI>knj*?eg{-P^$}0|mX&0R$iJ>NT49!PCv6=LL>1=IYCD`Am&#hJ6Q<#( z*RhPVITU++X7BhxNrm|MKpnk&LvP{v3OlW8_R!XpO`BWKu{zQl=~9=(O}%?z|HSX3 z`^bD>Z@zK>D1XHSeLvs_Hk*#!Nj8h-EyR=&@EKQmh<3}%d%zx|tU-&WypG^PfjQbg z#fj9>8<IIcmD276oRl=rtkn^vGpO6oac*Z#n z=JZEC5rQiQ5v{$w+RyrZTn(`>MMKWz7g7)h!7Rc{iGn8sIZz@hfHCjyDNh6%t!x@3 zZQQ63m?KZfVmd*2IaAov`?FXuWZCA68A~&bLlLn;ZH4e|EsoZPYIwSMe5xH7+A)R& zFycD7QFs3C;k+;LdaspxV|kR4gY6zEt+i=q`)s$Ot*Y_etz!*yCpK&{xS=A4Uz=Lr zj=%jV!#17H7UpGy4Z^QI0xh>tl>5KyW2|Z?kYM$`ta%*mE?#75Du_cmrTAQ$8Dc-R zVg4yS?A5EgL&}~>OHV}0v_20pyi6bP!8<6ntuH{qrkGv9%6uNBdndV)DsB)Zg$65K z%=%GhV4=td_Uni<*JBM`!cqORGtLZOS5&1{ofA3ouz23dpF?aMii%FK8+#0s^T!Et zz0)87=}aSVZs2ePvRFJ`+WA2LdC3$x>um z-y|{#kY(yhVMV=0Txdpj>2%IM3xDK1;07%cuudetqei<#z{|K)|HK&C%Xmr9!g6<2 z&(&3!t&1~~2x{9!QrD?%-N;UE#GPqtl>sW|a16V_W}Q&|_~(!A{6l|jX}h%zIc~du zfj>XJT$Xe4l52rSlE2}#E|R~_dahA5fo-5ah@ej!t#e`v&dU8qZk-2|_X#xC#P3ff z)wsTZAA`|q$I%WH)17n^C}x&pUbTKGM-y6L$b9Lh;!N!5FljJWenOlGfUDwaXM+#{ zfFc4tZ3E8S?vU!5+j|5*K2Gzrl1$%hWy{7Wf%Kx81^_{+)jnK)uh*b1{{%ukauA=J;dg2LJStNHSAVwa_fL?U+id2yx*Q5w|7qtvM-SG-wPBri34Le2T{yW^*aUi~&RdoxP?v-~~^3Gw)S%;kaokkkAZm(*f0zuFW>>r08^fzLF3>-+J*i4Wb< 
zR3p&03WFgXpYXY~jNS;#mBh;&GE4JQguw^Dx8+{cldU}{DDo$6OHZuiZZdo2va(j99;Z-vNfOA?6YdDn#X5z0=7K+>^0VWc8+V-d=4mU)_jhe|DJ1n a13h26bhGvNU#qV2x1+8{%6~og$NvH!_lQ{l literal 0 HcmV?d00001