diff --git a/docs/guides/druid/concepts/_index.md b/docs/guides/druid/concepts/_index.md index 67c3be7748..a3832f9450 100755 --- a/docs/guides/druid/concepts/_index.md +++ b/docs/guides/druid/concepts/_index.md @@ -5,6 +5,6 @@ menu: identifier: guides-druid-concepts name: Concepts parent: guides-druid - weight: 20 + weight: 10 menu_name: docs_{{ .version }} ---- +--- \ No newline at end of file diff --git a/docs/guides/druid/concepts/appbinding.md b/docs/guides/druid/concepts/appbinding.md index 60f9b5bb4d..e61fc1c30f 100644 --- a/docs/guides/druid/concepts/appbinding.md +++ b/docs/guides/druid/concepts/appbinding.md @@ -22,218 +22,127 @@ If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/) KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. -[//]: # (## AppBinding CRD Specification) +## AppBinding CRD Specification -[//]: # () -[//]: # (Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section.) +Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section. 
-[//]: # () -[//]: # (An `AppBinding` object created by `KubeDB` for PostgreSQL database is shown below,) +An `AppBinding` object created by `KubeDB` for a Kafka database is shown below, -[//]: # () -[//]: # (```yaml) +```yaml +apiVersion: appcatalog.appscode.com/v1alpha1 +kind: AppBinding +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"kubedb.com/v1alpha2","kind":"Kafka","metadata":{"annotations":{},"name":"kafka","namespace":"demo"},"spec":{"enableSSL":true,"monitor":{"agent":"prometheus.io/operator","prometheus":{"exporter":{"port":9091},"serviceMonitor":{"interval":"10s","labels":{"release":"prometheus"}}}},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","deletionPolicy":"WipeOut","tls":{"issuerRef":{"apiGroup":"cert-manager.io","kind":"Issuer","name":"kafka-ca-issuer"}},"version":"3.6.1"}} + creationTimestamp: "2023-03-27T08:04:43Z" + generation: 1 + labels: + app.kubernetes.io/component: database + app.kubernetes.io/instance: kafka + app.kubernetes.io/managed-by: kubedb.com + app.kubernetes.io/name: kafkas.kubedb.com + name: kafka + namespace: demo + ownerReferences: + - apiVersion: kubedb.com/v1alpha2 + blockOwnerDeletion: true + controller: true + kind: Kafka + name: kafka + uid: a4d3bd6d-798d-4789-a228-6eed057ccbb2 + resourceVersion: "409855" + uid: 946988c0-15ef-4ee8-b489-b7ea9be3f97e +spec: + appRef: + apiGroup: kubedb.com + kind: Kafka + name: kafka + namespace: demo + clientConfig: + caBundle: dGhpcyBpcyBub3QgYSBjZXJ0 + service: + name: kafka-pods + port: 9092 + scheme: https + secret: + name: kafka-admin-cred + tlsSecret: + name: kafka-client-cert + type: kubedb.com/kafka + version: 3.6.1 +``` -[//]: # (apiVersion: appcatalog.appscode.com/v1alpha1) +Here, we are going to describe the sections of an `AppBinding` CRD.
-[//]: # (kind: AppBinding) +### AppBinding `Spec` -[//]: # (metadata:) +An `AppBinding` object has the following fields in the `spec` section: -[//]: # ( name: quick-postgres) +#### spec.type -[//]: # ( namespace: demo) +`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. -[//]: # ( labels:) + + -[//]: # ( app.kubernetes.io/version: "10.2"-v2) +#### spec.secret -[//]: # ( app.kubernetes.io/name: postgreses.kubedb.com) +`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`. -[//]: # ( app.kubernetes.io/instance: quick-postgres) +This secret must contain the following keys for Kafka: -[//]: # (spec:) +| Key | Usage | +| ---------- |------------------------------------------------| +| `username` | Username of the target Kafka instance. | +| `password` | Password for the user specified by `username`. | -[//]: # ( type: kubedb.com/postgres) -[//]: # ( secret:) +#### spec.appRef +`spec.appRef` refers to the underlying application. It has four fields: `apiGroup`, `kind`, `name`, and `namespace`. -[//]: # ( name: quick-postgres-auth) +#### spec.clientConfig -[//]: # ( clientConfig:) +`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them. -[//]: # ( service:) +You can configure the following fields in the `spec.clientConfig` section: -[//]: # ( name: quick-postgres) +- **spec.clientConfig.url** -[//]: # ( path: /) + `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside the Kubernetes cluster. If your database is running inside the cluster, use the `spec.clientConfig.service` section instead.
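As an illustration of the URL form described above, a minimal `clientConfig` for an out-of-cluster database might look like the sketch below; the scheme, host, and port are hypothetical placeholders, not values taken from this guide.

```yaml
# Hypothetical sketch: an out-of-cluster endpoint referenced by URL.
# Note that no credentials, fragments, or query parameters appear in the URL.
clientConfig:
  url: "https://kafka.example.com:9092/"
```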
-[//]: # ( port: 5432) +> Note that attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either. -[//]: # ( query: sslmode=disable) +- **spec.clientConfig.service** -[//]: # ( scheme: postgresql) + If you are running the database inside the Kubernetes cluster, you can use a Kubernetes service to connect with the database. You have to specify the following fields in the `spec.clientConfig.service` section if you manually create an `AppBinding` object. -[//]: # ( secretTransforms:) + - **name :** `name` indicates the name of the service that connects with the target database. + - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database. + - **port :** `port` specifies the port where the target database is running. -[//]: # ( - renameKey:) +- **spec.clientConfig.insecureSkipTLSVerify** -[//]: # ( from: POSTGRES_USER) + `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through the `spec.clientConfig.caBundle` field instead. -[//]: # ( to: username) +- **spec.clientConfig.caBundle** -[//]: # ( - renameKey:) + `spec.clientConfig.caBundle` is a PEM-encoded CA bundle which will be used to validate the serving certificate of the database. -[//]: # ( from: POSTGRES_PASSWORD) +## Next Steps
- -[//]: # () -[//]: # (### AppBinding `Spec`) - -[//]: # () -[//]: # (An `AppBinding` object has the following fields in the `spec` section:) - -[//]: # () -[//]: # (#### spec.type) - -[//]: # () -[//]: # (`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.) - -[//]: # () -[//]: # (This field follows the following format: `/`. The above AppBinding is pointing to a `postgres` resource under `kubedb.com` group.) - -[//]: # () -[//]: # (Here, the variables are parsed as follows:) - -[//]: # () -[//]: # (| Variable | Usage |) - -[//]: # (| --------------------- | --------------------------------------------------------------------------------------------------------------------------------- |) - -[//]: # (| `TARGET_APP_GROUP` | Represents the application group where the respective app belongs (i.e: `kubedb.com`). |) - -[//]: # (| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `postgres`). |) - -[//]: # (| `TARGET_APP_TYPE` | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/postgres`). |) - -[//]: # () -[//]: # (#### spec.secret) - -[//]: # () -[//]: # (`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.) - -[//]: # () -[//]: # (This secret must contain the following keys:) - -[//]: # () -[//]: # (PostgreSQL :) - -[//]: # () -[//]: # (| Key | Usage |) - -[//]: # (| ------------------- | --------------------------------------------------- |) - -[//]: # (| `POSTGRES_USER` | Username of the target database. 
|) - -[//]: # (| `POSTGRES_PASSWORD` | Password for the user specified by `POSTGRES_USER`. |) - -[//]: # () -[//]: # (MySQL :) - -[//]: # () -[//]: # (| Key | Usage |) - -[//]: # (| ---------- | ---------------------------------------------- |) - -[//]: # (| `username` | Username of the target database. |) - -[//]: # (| `password` | Password for the user specified by `username`. |) - -[//]: # () -[//]: # (MongoDB :) - -[//]: # () -[//]: # (| Key | Usage |) - -[//]: # (| ---------- | ---------------------------------------------- |) - -[//]: # (| `username` | Username of the target database. |) - -[//]: # (| `password` | Password for the user specified by `username`. |) - -[//]: # () -[//]: # (Elasticsearch:) - -[//]: # () -[//]: # (| Key | Usage |) - -[//]: # (| ---------------- | ----------------------- |) - -[//]: # (| `ADMIN_USERNAME` | Admin username |) - -[//]: # (| `ADMIN_PASSWORD` | Password for admin user |) - -[//]: # () -[//]: # (#### spec.clientConfig) - -[//]: # () -[//]: # (`spec.clientConfig` defines how to communicate with the target database. You can use either an URL or a Kubernetes service to connect with the database. You don't have to specify both of them.) - -[//]: # () -[//]: # (You can configure following fields in `spec.clientConfig` section:) - -[//]: # () -[//]: # (- **spec.clientConfig.url**) - -[//]: # () -[//]: # ( `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use `spec.clientConfig.service` section instead.) - -[//]: # () -[//]: # ( > Note that, attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.) 
- -[//]: # () -[//]: # (- **spec.clientConfig.service**) - -[//]: # () -[//]: # ( If you are running the database inside the Kubernetes cluster, you can use Kubernetes service to connect with the database. You have to specify the following fields in `spec.clientConfig.service` section if you manually create an `AppBinding` object.) - -[//]: # () -[//]: # ( - **name :** `name` indicates the name of the service that connects with the target database.) - -[//]: # ( - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.) - -[//]: # ( - **port :** `port` specifies the port where the target database is running.) - -[//]: # () -[//]: # (- **spec.clientConfig.insecureSkipTLSVerify**) - -[//]: # () -[//]: # ( `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage to disable TLS verification during backup. You should provide the respective CA bundle through `spec.clientConfig.caBundle` field instead.) - -[//]: # () -[//]: # (- **spec.clientConfig.caBundle**) - -[//]: # () -[//]: # ( `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.) - -[//]: # (## Next Steps) - -[//]: # () -[//]: # (- Learn how to use KubeDB to manage various databases [here](/docs/guides/README.md).) - -[//]: # (- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).) +- Learn how to use KubeDB to manage various databases [here](/docs/guides/README.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). 
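As a rough illustration of how an AppBinding's `clientConfig.service` fields compose into a connection address, the snippet below joins the `scheme`, service `name`, namespace, and `port` from the example AppBinding into the usual in-cluster DNS form; the `<name>.<namespace>.svc` pattern is an assumption for illustration, not something this guide prescribes.

```shell
# Compose a connection address from AppBinding clientConfig.service fields.
# Values mirror the example AppBinding; the DNS form is an assumption.
scheme="https"
name="kafka-pods"
namespace="demo"
port="9092"

url="${scheme}://${name}.${namespace}.svc:${port}"
echo "${url}"
```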
diff --git a/docs/guides/druid/concepts/catalog.md b/docs/guides/druid/concepts/catalog.md deleted file mode 100644 index 57ef475dc8..0000000000 --- a/docs/guides/druid/concepts/catalog.md +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: DruidVersion CRD -menu: - docs_{{ .version }}: - identifier: guides-druid-concepts-catalog - name: DruidVersion - parent: guides-druid-concepts - weight: 15 -menu_name: docs_{{ .version }} -section_menu_id: guides ---- - -> New to KubeDB? Please start [here](/docs/README.md). - -# DruidVersion - -[//]: # (## What is DruidVersion) - -[//]: # () -[//]: # (`DruidVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for [PgBouncer](https://pgbouncer.github.io/) server deployed with KubeDB in a Kubernetes native way.) - -[//]: # () -[//]: # (When you install KubeDB, a `DruidVersion` custom resource will be created automatically for every supported PgBouncer release versions. You have to specify the name of `DruidVersion` crd in `spec.version` field of [PgBouncer](/docs/guides/pgbouncer/concepts/pgbouncer.md) crd. Then, KubeDB will use the docker images specified in the `DruidVersion` crd to create your expected PgBouncer instance.) - -[//]: # () -[//]: # (Using a separate crd for specifying respective docker image names allow us to modify the images independent of KubeDB operator. This will also allow the users to use a custom PgBouncer image for their server. For more details about how to use custom image with PgBouncer in KubeDB, please visit [here](/docs/guides/pgbouncer/custom-versions/setup.md).) - -[//]: # (## DruidVersion Specification) - -[//]: # () -[//]: # (As with all other Kubernetes objects, a DruidVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.) 
- -[//]: # () -[//]: # (```yaml) - -[//]: # (apiVersion: catalog.kubedb.com/v1alpha1) - -[//]: # (kind: DruidVersion) - -[//]: # (metadata:) - -[//]: # ( name: "1.17.0") - -[//]: # ( labels:) - -[//]: # ( app: kubedb) - -[//]: # (spec:) - -[//]: # ( deprecated: false) - -[//]: # ( version: "1.17.0") - -[//]: # ( pgBouncer:) - -[//]: # ( image: "${KUBEDB_CATALOG_REGISTRY}/pgbouncer:1.17.0") - -[//]: # ( exporter:) - -[//]: # ( image: "${KUBEDB_CATALOG_REGISTRY}/pgbouncer_exporter:v0.1.1") - -[//]: # (```) - -[//]: # () -[//]: # (### metadata.name) - -[//]: # () -[//]: # (`metadata.name` is a required field that specifies the name of the `DruidVersion` crd. You have to specify this name in `spec.version` field of [PgBouncer](/docs/guides/pgbouncer/concepts/pgbouncer.md) crd.) - -[//]: # () -[//]: # (We follow this convention for naming DruidVersion crd:) - -[//]: # () -[//]: # (- Name format: `{Original pgbouncer image version}-{modification tag}`) - -[//]: # () -[//]: # (We plan to modify original PgBouncer docker images to support additional features. Re-tagging the image with v1, v2 etc. modification tag helps separating newer iterations from the older ones. An image with higher modification tag will have more features than the images with lower modification tag. Hence, it is recommended to use DruidVersion crd with highest modification tag to take advantage of the latest features.) - -[//]: # () -[//]: # (### spec.version) - -[//]: # () -[//]: # (`spec.version` is a required field that specifies the original version of PgBouncer that has been used to build the docker image specified in `spec.server.image` field.) - -[//]: # () -[//]: # (### spec.deprecated) - -[//]: # () -[//]: # (`spec.deprecated` is an optional field that specifies whether the docker images specified here is supported by the current KubeDB operator.) - -[//]: # () -[//]: # (The default value of this field is `false`. 
If `spec.deprecated` is set `true`, KubeDB operator will not create the server and other respective resources for this version.) - -[//]: # () -[//]: # (### spec.pgBouncer.image) - -[//]: # () -[//]: # (`spec.pgBouncer.image` is a required field that specifies the docker image which will be used to create Petset by KubeDB operator to create expected PgBouncer server.) - -[//]: # () -[//]: # (### spec.exporter.image) - -[//]: # () -[//]: # (`spec.exporter.image` is a required field that specifies the image which will be used to export Prometheus metrics.) - -[//]: # (## Next Steps) - -[//]: # () -[//]: # (- Learn about PgBouncer crd [here](/docs/guides/pgbouncer/concepts/catalog.md).) - -[//]: # (- Deploy your first PgBouncer server with KubeDB by following the guide [here](/docs/guides/pgbouncer/quickstart/quickstart.md).) \ No newline at end of file diff --git a/docs/guides/druid/concepts/druid.md b/docs/guides/druid/concepts/druid.md index 41824e7cfe..d74ab8c737 100644 --- a/docs/guides/druid/concepts/druid.md +++ b/docs/guides/druid/concepts/druid.md @@ -12,357 +12,419 @@ section_menu_id: guides > New to KubeDB? Please start [here](/docs/README.md). +# Kafka + +## What is Kafka + +`Kafka` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [Kafka](https://kafka.apache.org/) in a Kubernetes native way. You only need to describe the desired database configuration in a `Kafka`object, and the KubeDB operator will create Kubernetes objects in the desired state for you. + +## Kafka Spec + +As with all other Kubernetes objects, a Kafka needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example Kafka object. 
+ +```yaml +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka + namespace: demo +spec: + authSecret: + name: kafka-admin-cred + configSecret: + name: kafka-custom-config + enableSSL: true + healthChecker: + failureThreshold: 3 + periodSeconds: 20 + timeoutSeconds: 10 + keystoreCredSecret: + name: kafka-keystore-cred + podTemplate: + metadata: + annotations: + passMe: ToDatabasePod + labels: + thisLabel: willGoToPod + controller: + annotations: + passMe: ToPetSet + labels: + thisLabel: willGoToSts + storageType: Durable + deletionPolicy: DoNotTerminate + tls: + certificates: + - alias: server + secretName: kafka-server-cert + - alias: client + secretName: kafka-client-cert + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: kafka-ca-issuer + topology: + broker: + podTemplate: + spec: + containers: + - name: kafka + resources: + requests: + cpu: 500m + memory: 1024Mi + limits: + cpu: 700m + memory: 2Gi + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi + storageClassName: standard + controller: + replicas: 1 + podTemplate: + spec: + containers: + - name: kafka + resources: + requests: + cpu: 500m + memory: 1024Mi + limits: + cpu: 700m + memory: 2Gi + monitor: + agent: prometheus.io/operator + prometheus: + exporter: + port: 56790 + serviceMonitor: + labels: + release: prometheus + interval: 10s + version: 3.6.1 +``` + +### spec.version + +`spec.version` is a required field specifying the name of the [KafkaVersion](/docs/guides/kafka/concepts/kafkaversion.md) CRD where the docker images are specified. Currently, when you install KubeDB, it creates the following `KafkaVersion` resources: + +- `3.3.2` +- `3.4.1` +- `3.5.1` +- `3.5.2` +- `3.6.0` +- `3.6.1` + +### spec.replicas + +`spec.replicas` specifies the number of members in the Kafka replica set. + +If `spec.topology` is set, then `spec.replicas` needs to be empty. Instead use `spec.topology.controller.replicas` and `spec.topology.broker.replicas`.
You need to set both of them for topology clustering. + +KubeDB uses `PodDisruptionBudget` to ensure that a majority of these replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum is maintained. + +### spec.authSecret + +`spec.authSecret` is an optional field that points to a Secret used to hold credentials for the Kafka `admin` user. If not set, the KubeDB operator creates a new Secret `{kafka-object-name}-auth` that stores the password for the `admin` user for each Kafka object. + +This field can be used in three modes: +1. Using an external secret. In this case, you need to create an auth secret first with the required fields, then specify the secret name via `spec.authSecret.name` and set `spec.authSecret.externallyManaged` to `true` when creating the Kafka object. +```yaml +authSecret: + name: + externallyManaged: true +``` + +2. Specifying the secret name only. In this case, you need to specify the secret name via `spec.authSecret.name` when creating the Kafka object; `externallyManaged` defaults to `false`. +```yaml +authSecret: + name: +``` + +3. Letting KubeDB do everything for you. In this case, no action is required; KubeDB creates and manages the auth secret itself. + +The auth secret contains a `username` key and a `password` key, which hold the username and password of the Kafka `admin` user. + +Example: + +```bash +$ kubectl create secret generic kf-auth -n demo \ +--from-literal=username=jhon-doe \ +--from-literal=password=6q8u_2jMOW-OOZXk +secret "kf-auth" created +``` + +```yaml +apiVersion: v1 +data: + password: NnE4dV8yak1PVy1PT1pYaw== + username: amhvbi1kb2U= +kind: Secret +metadata: + name: kf-auth + namespace: demo +type: Opaque +``` + +Secrets provided by users are not managed by KubeDB, and therefore won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
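The `data` values in the Secret above are simply the base64 encodings of the literals passed to `kubectl`; this can be verified locally (assuming the coreutils `base64` tool is available):

```shell
# Encode the example username the way Kubernetes stores it in .data,
# then decode it back to confirm the round trip.
encoded="$(printf '%s' 'jhon-doe' | base64)"
decoded="$(printf '%s' "${encoded}" | base64 -d)"
echo "${encoded}"   # amhvbi1kb2U=, matching the secret's username field
echo "${decoded}"   # jhon-doe
```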
+ +### spec.configSecret + +`spec.configSecret` is an optional field that points to a Secret used to hold custom Kafka configuration. If not set, the KubeDB operator will use the default configuration for Kafka. + +### spec.topology + +`spec.topology` represents the topology configuration for the Kafka cluster in KRaft mode. + +When `spec.topology` is set, the following fields need to be empty; otherwise, the validating webhook will throw an error. + +- `spec.replicas` +- `spec.podTemplate` +- `spec.storage` + +#### spec.topology.broker + +`broker` represents the configuration for the Kafka brokers. In KRaft topology mode, each `broker` pod acts as a dedicated Kafka broker. -# Druid +Available configurable fields: -[//]: # () -[//]: # (## What is PgBouncer) +- `topology.broker`: + - `replicas` (`: "1"`) - is an `optional` field to specify the number of nodes (i.e. pods) that act as the dedicated Kafka `broker` pods. Defaults to `1`. + - `suffix` (`: "broker"`) - is an `optional` field that is added as the suffix of the broker PetSet name. Defaults to `broker`. + - `storage` is a `required` field that specifies how much storage to claim for each of the `broker` pods. + - `resources` (`: "cpu: 500m, memory: 1Gi" `) - is an `optional` field that specifies how much computational resources to request or to limit for each of the `broker` pods. -[//]: # () -[//]: # (`PgBouncer` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [PgBouncer](https://www.pgbouncer.github.io/) in a Kubernetes native way. You only need to describe the desired configurations in a `PgBouncer` object, and the KubeDB operator will create Kubernetes resources in the desired state for you.) +#### spec.topology.controller -[//]: # () -[//]: # (## PgBouncer Spec) +`controller` represents the configuration for the Kafka controllers.
In KRaft topology mode, each `controller` pod acts as a dedicated Kafka controller that preserves metadata for the whole cluster and participates in leader election. -[//]: # () -[//]: # (Like any official Kubernetes resource, a `PgBouncer` object has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.) +Available configurable fields: -[//]: # () -[//]: # (Below is an example PgBouncer object.) +- `topology.controller`: + - `replicas` (`: "1"`) - is an `optional` field to specify the number of nodes (i.e. pods) that act as the dedicated Kafka `controller` pods. Defaults to `1`. + - `suffix` (`: "controller"`) - is an `optional` field that is added as the suffix of the controller PetSet name. Defaults to `controller`. + - `storage` is a `required` field that specifies how much storage to claim for each of the `controller` pods. + - `resources` (`: "cpu: 500m, memory: 1Gi" `) - is an `optional` field that specifies how much computational resources to request or to limit for each of the `controller` pods. -[//]: # () -[//]: # (```yaml) +### spec.enableSSL -[//]: # (apiVersion: kubedb.com/v1alpha2) +`spec.enableSSL` is an `optional` field that specifies whether to enable TLS for the HTTP layer. The default value of this field is `false`. -[//]: # (kind: PgBouncer) +```yaml +spec: + enableSSL: true +``` -[//]: # (metadata:) +### spec.tls -[//]: # ( name: pgbouncer-server) +`spec.tls` specifies the TLS/SSL configurations. The KubeDB operator supports TLS management by using [cert-manager](https://cert-manager.io/). Currently, the operator only supports `PKCS#8`-encoded certificates.
-[//]: # ( namespace: demo) +```yaml +spec: + tls: + issuerRef: + apiGroup: "cert-manager.io" + kind: Issuer + name: kf-issuer + certificates: + - alias: server + privateKey: + encoding: PKCS8 + secretName: kf-server-cert + subject: + organizations: + - kubedb + - alias: client + privateKey: + encoding: PKCS8 + secretName: kf-client-cert + subject: + organizations: + - kubedb +``` -[//]: # (spec:) +`spec.tls` contains the following fields: -[//]: # ( version: "1.18.0") +- `tls.issuerRef` - is an `optional` field that references the `Issuer` or `ClusterIssuer` custom resource object of [cert-manager](https://cert-manager.io/docs/concepts/issuer/). It is used to generate the necessary certificate secrets for Kafka. If the `issuerRef` is not specified, the operator creates a self-signed CA and also creates the necessary certificate (valid: 365 days) secrets using that CA. + - `apiGroup` - is the group name of the resource that is being referenced. Currently, the only supported value is `cert-manager.io`. + - `kind` - is the type of resource that is being referenced. The supported values are `Issuer` and `ClusterIssuer`. + - `name` - is the name of the resource ( `Issuer` or `ClusterIssuer` ) that is being referenced. -[//]: # ( replicas: 2) +- `tls.certificates` - is an `optional` field that specifies a list of certificate configurations used to configure the certificates. It has the following fields: + - `alias` - represents the identifier of the certificate. It has the following possible values: + - `server` - is used for the server certificate configuration. + - `client` - is used for the client certificate configuration. -[//]: # ( databases:) + - `secretName` - ( `string` | `"-alias-cert"` ) - specifies the k8s secret name that holds the certificates. -[//]: # ( - alias: "postgres") + - `subject` - specifies an `X.509` distinguished name (DN). It has the following configurable fields: + - `organizations` ( `[]string` | `nil` ) - is a list of organization names.
+ - `organizationalUnits` ( `[]string` | `nil` ) - is a list of organization unit names. + - `countries` ( `[]string` | `nil` ) - is a list of country names (ie. Country Codes). + - `localities` ( `[]string` | `nil` ) - is a list of locality names. + - `provinces` ( `[]string` | `nil` ) - is a list of province names. + - `streetAddresses` ( `[]string` | `nil` ) - is a list of street addresses. + - `postalCodes` ( `[]string` | `nil` ) - is a list of postal codes. + - `serialNumber` ( `string` | `""` ) is a serial number. -[//]: # ( databaseName: "postgres") + For more details, visit [here](https://golang.org/pkg/crypto/x509/pkix/#Name). -[//]: # ( databaseRef:) + - `duration` ( `string` | `""` ) - is the period during which the certificate is valid. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as `"300m"`, `"1.5h"` or `"20h45m"`. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". + - `renewBefore` ( `string` | `""` ) - is a specifiable time before expiration duration. + - `dnsNames` ( `[]string` | `nil` ) - is a list of subject alt names. + - `ipAddresses` ( `[]string` | `nil` ) - is a list of IP addresses. + - `uris` ( `[]string` | `nil` ) - is a list of URI Subject Alternative Names. + - `emailAddresses` ( `[]string` | `nil` ) - is a list of email Subject Alternative Names. -[//]: # ( name: "quick-postgres") -[//]: # ( namespace: demo) +### spec.storageType -[//]: # ( connectionPool:) +`spec.storageType` is an optional field that specifies the type of storage to use for database. It can be either `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used then KubeDB will create Kafka cluster using [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volume. 
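Putting `spec.storageType` and `spec.storage` together, a minimal durable-storage sketch looks like the following; the `standard` StorageClass and `10Gi` request mirror the example object earlier in this guide and should be adjusted for your cluster.

```yaml
# Sketch: durable storage for a non-topology Kafka cluster.
# StorageClass and size are illustrative, taken from the earlier example.
spec:
  storageType: Durable
  storage:
    storageClassName: standard
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```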
-[//]: # ( maxClientConnections: 20) +### spec.storage -[//]: # ( reservePoolSize: 5) +If you set `spec.storageType:` to `Durable`, then `spec.storage` is a required field that specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the PetSet created by KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests. -[//]: # ( monitor:) +- `spec.storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on. +- `spec.storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes. +- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs. -[//]: # ( agent: prometheus.io/operator) +To learn how to configure `spec.storage`, please visit the links below: -[//]: # ( prometheus:) +- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims -[//]: # ( serviceMonitor:) +NB. If `spec.topology` is set, then `spec.storage` needs to be empty. Instead use `spec.topology..storage` -[//]: # ( labels:) +### spec.monitor -[//]: # ( release: prometheus) +Kafka managed by KubeDB can be monitored with Prometheus operator out-of-the-box. 
To learn more, +- [Monitor Apache Kafka with Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md) +- [Monitor Apache Kafka with Built-in Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md) -[//]: # ( interval: 10s) +### spec.podTemplate -[//]: # (```) +KubeDB allows providing a template for the database pods through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the PetSet created for the Kafka cluster. -[//]: # () -[//]: # (### spec.version) +KubeDB accepts the following fields to be set in `spec.podTemplate`: -[//]: # () -[//]: # (`spec.version` is a required field that specifies the name of the [PgBouncerVersion](/docs/guides/pgbouncer/concepts/catalog.md) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `PgBouncerVersion` resources,) +- metadata: + - annotations (pod's annotation) + - labels (pod's labels) +- controller: + - annotations (petset's annotation) + - labels (petset's labels) +- spec: + - containers + - volumes + - podPlacementPolicy + - initContainers + - imagePullSecrets + - nodeSelector + - affinity + - serviceAccountName + - schedulerName + - tolerations + - priorityClassName + - priority + - securityContext + - livenessProbe + - readinessProbe + - lifecycle -[//]: # () -[//]: # (- `1.18.0`) +You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/master/api/v2/types.go#L26C1-L279C1). +Uses of some fields of `spec.podTemplate` are described below. -[//]: # () -[//]: # (### spec.replicas) +NB. If `spec.topology` is set, then `spec.podTemplate` needs to be empty. Instead use `spec.topology..podTemplate` -[//]: # () -[//]: # (`spec.replicas` specifies the total number of available pgbouncer server nodes for each crd.
KubeDB uses `PodDisruptionBudget` to ensure that majority of the replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions).) +#### spec.podTemplate.spec.tolerations -[//]: # () -[//]: # (### spec.databases) +`spec.podTemplate.spec.tolerations` is an optional field. This can be used to specify the pod's tolerations. -[//]: # () -[//]: # (`spec.databases` specifies an array of postgres databases that pgbouncer should add to its connection pool. It contains three `required` fields and two `optional` fields for each database connection.) +#### spec.podTemplate.spec.volumes -[//]: # () -[//]: # (- `spec.databases.alias`: specifies an alias for the target database located in a postgres server specified by an appbinding.) +`spec.podTemplate.spec.volumes` is an optional field. This can be used to provide the list of volumes that can be mounted by containers belonging to the pod. -[//]: # (- `spec.databases.databaseName`: specifies the name of the target database.) +#### spec.podTemplate.spec.podPlacementPolicy -[//]: # (- `spec.databases.databaseRef`: specifies the name and namespace of the AppBinding that contains the path to a PostgreSQL server where the target database can be found.) +`spec.podTemplate.spec.podPlacementPolicy` is an optional field. This can be used to provide a reference to a podPlacementPolicy. The PetSet controller uses it to place the database pods across regions, zones & nodes according to the policy. It utilizes the Kubernetes affinity & podTopologySpreadConstraints features to do so. -[//]: # () -[//]: # (ConnectionPool is used to configure pgbouncer connection-pool. All the fields here are accompanied by default values and can be left unspecified if no customisation is required by the user.)
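To make the pod template fields above concrete, here is a minimal sketch of a `spec.podTemplate` fragment. The toleration key/values, the ConfigMap name `kafka-extra-config`, and the policy name `default` are illustrative assumptions, not values mandated by KubeDB:

```yaml
spec:
  podTemplate:
    spec:
      tolerations:                   # allow scheduling onto matching tainted nodes
        - key: "dedicated"
          operator: "Equal"
          value: "kafka"
          effect: "NoSchedule"
      volumes:                       # extra volume mountable by the pod's containers
        - name: extra-config
          configMap:
            name: kafka-extra-config # hypothetical ConfigMap
      podPlacementPolicy:
        name: default                # reference to a PodPlacementPolicy object
```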
+#### spec.podTemplate.spec.nodeSelector -[//]: # () -[//]: # (- `spec.connectionPool.port`: specifies the port on which pgbouncer should listen to connect with clients. The default is 5432.) +`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector). -[//]: # () -[//]: # (- `spec.connectionPool.poolMode`: specifies the value of pool_mode. Specifies when a server connection can be reused by other clients.) +### spec.serviceTemplates -[//]: # () -[//]: # ( - session) +You can also provide templates for the services created by the KubeDB operator for the Kafka cluster through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services. -[//]: # () -[//]: # ( Server is released back to pool after client disconnects. Default.) +KubeDB allows the following fields to set in `spec.serviceTemplates`: +- `alias` represents the identifier of the service. It has the following possible value: + - `stats` is used for the exporter service identification. +- metadata: + - labels + - annotations +- spec: + - type + - ports + - clusterIP + - externalIPs + - loadBalancerIP + - loadBalancerSourceRanges + - externalTrafficPolicy + - healthCheckNodePort + - sessionAffinityConfig -[//]: # () -[//]: # ( - transaction) +See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.21.1/api/v1/types.go#L237) to understand these fields in detail. -[//]: # () -[//]: # ( Server is released back to pool after transaction finishes.) -[//]: # () -[//]: # ( - statement) +#### spec.podTemplate.spec.containers -[//]: # () -[//]: # ( Server is released back to pool after query finishes. Long transactions spanning multiple statements are disallowed in this mode.)
+The `spec.podTemplate.spec.containers` can be used to provide the list of containers and their configurations for the database pod. Some of the fields are described below. -[//]: # () -[//]: # (- `spec.connectionPool.maxClientConnections`: specifies the value of max_client_conn. When increased then the file descriptor limits should also be increased. Note that actual number of file descriptors used is more than max_client_conn. Theoretical maximum used is:) +##### spec.podTemplate.spec.containers[].name +The `spec.podTemplate.spec.containers[].name` field is used to specify the name of the container, specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. -[//]: # () -[//]: # ( ```bash) +##### spec.podTemplate.spec.containers[].args +`spec.podTemplate.spec.containers[].args` is an optional field. This can be used to provide additional arguments to the database installation. -[//]: # ( max_client_conn + (max pool_size * total databases * total users)) +##### spec.podTemplate.spec.containers[].env -[//]: # ( ```) +`spec.podTemplate.spec.containers[].env` is an optional field that specifies the environment variables to pass to the Kafka containers. -[//]: # () -[//]: # ( if each user connects under its own username to server. If a database user is specified in connect string (all users connect under same username), the theoretical maximum is:) +##### spec.podTemplate.spec.containers[].resources -[//]: # () -[//]: # ( ```bash) +`spec.podTemplate.spec.containers[].resources` is an optional field. This can be used to request compute resources required by containers of the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
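As an illustration of the container-level fields above, a hedged `spec.podTemplate.spec.containers` fragment might look like the following. The container name `kafka`, the extra argument, and the env value are assumptions made for the sake of example:

```yaml
spec:
  podTemplate:
    spec:
      containers:
        - name: kafka                  # assumed to match the container name used by the operator
          args:
            - --illustrative-flag      # hypothetical extra argument appended at startup
          env:
            - name: KAFKA_HEAP_OPTS    # example environment variable for JVM heap sizing
              value: "-Xmx1g -Xms1g"
          resources:
            requests:
              cpu: "500m"
              memory: "1Gi"
            limits:
              cpu: "1"
              memory: "2Gi"
```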
-[//]: # ( max_client_conn + (max pool_size * total databases)) +### spec.deletionPolicy -[//]: # ( ```) +`deletionPolicy` gives you the flexibility to `nullify` (reject) the delete operation of the `Kafka` crd, or to decide which resources KubeDB should keep or delete when you delete the `Kafka` crd. KubeDB provides the following four deletion policies: -[//]: # () -[//]: # ( The theoretical maximum should be never reached, unless somebody deliberately crafts special load for it. Still, it means you should set the number of file descriptors to a safely high number.) +- DoNotTerminate +- WipeOut +- Halt +- Delete -[//]: # () -[//]: # ( Search for `ulimit` in your favorite shell man page. Note: `ulimit` does not apply in a Windows environment.) +When `deletionPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature of Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as `spec.deletionPolicy` is set to `DoNotTerminate`. -[//]: # () -[//]: # ( Default: 100) +## spec.healthChecker +It defines the attributes for the health checker. +- `spec.healthChecker.periodSeconds` specifies how often to perform the health check. +- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out. +- `spec.healthChecker.failureThreshold` specifies the minimum consecutive failures for the healthChecker to be considered failed. +- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not. -[//]: # () -[//]: # (- `spec.connectionPool.defaultPoolSize`: specifies the value of default_pool_size. Used to determine how many server connections to allow per user/database pair. Can be overridden in the per-database configuration.) +Learn more about KubeDB health checking from this [blog post](https://appscode.com/blog/post/kubedb-health-checker/).
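Putting the `spec.healthChecker` fields together, a sketch with non-default values (chosen purely for illustration) could look like:

```yaml
spec:
  healthChecker:
    periodSeconds: 15        # run the health check every 15 seconds
    timeoutSeconds: 10       # a probe taking longer than 10 seconds counts as a failure
    failureThreshold: 2      # two consecutive failures mark the database as not healthy
    disableWriteCheck: true  # skip the write check; only read checks are performed
```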
-[//]: # () -[//]: # ( Default: 20) +## Next Steps -[//]: # () -[//]: # (- `spec.connectionPool.minPoolSize`: specifies the value of min_pool_size. PgBouncer adds more server connections to pool if below this number. Improves behavior when usual load comes suddenly back after period of total inactivity.) - -[//]: # () -[//]: # ( Default: 0 (disabled)) - -[//]: # () -[//]: # (- `spec.connectionPool.reservePoolSize`: specifies the value of reserve_pool_size. Used to determine how many additional connections to allow to a pool. 0 disables.) - -[//]: # () -[//]: # ( Default: 0 (disabled)) - -[//]: # () -[//]: # (- `spec.connectionPool.reservePoolTimeout`: specifies the value of reserve_pool_timeout. If a client has not been serviced in this many seconds, pgbouncer enables use of additional connections from reserve pool. 0 disables.) - -[//]: # () -[//]: # ( Default: 5.0) - -[//]: # () -[//]: # (- `spec.connectionPool.maxDbConnections`: specifies the value of max_db_connections. PgBouncer does not allow more than this many connections per-database (regardless of pool - i.e. user). It should be noted that when you hit the limit, closing a client connection to one pool will not immediately allow a server connection to be established for another pool, because the server connection for the first pool is still open. Once the server connection closes (due to idle timeout), a new server connection will immediately be opened for the waiting pool.) - -[//]: # () -[//]: # ( Default: unlimited) - -[//]: # () -[//]: # (- `spec.connectionPool.maxUserConnections`: specifies the value of max_user_connections. PgBouncer does not allow more than this many connections per-user (regardless of pool - i.e. user). It should be noted that when you hit the limit, closing a client connection to one pool will not immediately allow a server connection to be established for another pool, because the server connection for the first pool is still open. 
Once the server connection closes (due to idle timeout), a new server connection will immediately be opened for the waiting pool.) - -[//]: # ( Default: unlimited) - -[//]: # () -[//]: # (- `spec.connectionPool.statsPeriod`: sets how often the averages shown in various `SHOW` commands are updated and how often aggregated statistics are written to the log.) - -[//]: # ( Default: 60) - -[//]: # () -[//]: # (- `spec.connectionPool.authType`: specifies how to authenticate users. PgBouncer supports several authentication methods including pam, md5, scram-sha-256, trust , or any. However hba, and cert are not supported.) - -[//]: # () -[//]: # (- `spec.connectionPool.IgnoreStartupParameters`: specifies comma-separated startup parameters that pgbouncer knows are handled by admin and it can ignore them.) - -[//]: # () -[//]: # (### spec.monitor) - -[//]: # () -[//]: # (PgBouncer managed by KubeDB can be monitored with builtin-Prometheus and Prometheus operator out-of-the-box. To learn more,) - -[//]: # () -[//]: # (- [Monitor PgBouncer with builtin Prometheus](/docs/guides/pgbouncer/monitoring/using-builtin-prometheus.md)) - -[//]: # (- [Monitor PgBouncer with Prometheus operator](/docs/guides/pgbouncer/monitoring/using-prometheus-operator.md)) - -[//]: # () -[//]: # (### spec.podTemplate) - -[//]: # () -[//]: # (KubeDB allows providing a template for pgbouncer pods through `spec.podTemplate`. 
KubeDB operator will pass the information provided in `spec.podTemplate` to the PetSet created for PgBouncer server) - -[//]: # () -[//]: # (KubeDB accept following fields to set in `spec.podTemplate:`) - -[//]: # () -[//]: # (- metadata) - -[//]: # ( - annotations (pod's annotation)) - -[//]: # (- controller) - -[//]: # ( - annotations (petset's annotation)) - -[//]: # (- spec:) - -[//]: # ( - env) - -[//]: # ( - resources) - -[//]: # ( - initContainers) - -[//]: # ( - imagePullSecrets) - -[//]: # ( - affinity) - -[//]: # ( - tolerations) - -[//]: # ( - priorityClassName) - -[//]: # ( - priority) - -[//]: # ( - lifecycle) - -[//]: # () -[//]: # (Usage of some fields in `spec.podTemplate` is described below,) - -[//]: # () -[//]: # (#### spec.podTemplate.spec.env) - -[//]: # () -[//]: # (`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the PgBouncer docker image. To know about supported environment variables, please visit [here](https://hub.docker.com/kubedb/pgbouncer/).) - -[//]: # () -[//]: # (Also, note that KubeDB does not allow updates to the environment variables as updating them does not have any effect once the server is created. If you try to update environment variables, KubeDB operator will reject the request with following error,) - -[//]: # () -[//]: # (```ini) - -[//]: # (Error from server (BadRequest): error when applying patch:) - -[//]: # (...) - -[//]: # (for: "./pgbouncer.yaml": admission webhook "pgbouncer.validators.kubedb.com" denied the request: precondition failed for:) - -[//]: # (...) 
- -[//]: # (At least one of the following was changed:) - -[//]: # ( apiVersion) - -[//]: # ( kind) - -[//]: # ( name) - -[//]: # ( namespace) - -[//]: # ( spec.podTemplate.spec.nodeSelector) - -[//]: # (```) - -[//]: # () -[//]: # (#### spec.podTemplate.spec.imagePullSecrets) - -[//]: # () -[//]: # (`spec.podTemplate.spec.imagePullSecrets` is an optional field that points to secrets to be used for pulling docker image if you are using a private docker registry. For more details on how to use private docker registry, please visit [here](/docs/guides/pgbouncer/private-registry/using-private-registry.md).) - -[//]: # () -[//]: # (#### spec.podTemplate.spec.nodeSelector) - -[//]: # () -[//]: # (`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) .) - -[//]: # () -[//]: # (#### spec.podTemplate.spec.resources) - -[//]: # () -[//]: # (`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).) - -[//]: # () -[//]: # (### spec.serviceTemplate) - -[//]: # () -[//]: # (KubeDB creates a service for each PgBouncer instance. The service has the same name as the `pgbouncer.name` and points to pgbouncer pods.) - -[//]: # () -[//]: # (You can provide template for this service using `spec.serviceTemplate`. This will allow you to set the type and other properties of the service. If `spec.serviceTemplate` is not provided, KubeDB will create a service of type `ClusterIP` with minimal settings.) 
- -[//]: # () -[//]: # (KubeDB allows the following fields to set in `spec.serviceTemplate`:) - -[//]: # () -[//]: # (- metadata:) - -[//]: # ( - annotations) - -[//]: # (- spec:) - -[//]: # ( - type) - -[//]: # ( - ports) - -[//]: # ( - clusterIP) - -[//]: # ( - externalIPs) - -[//]: # ( - loadBalancerIP) - -[//]: # ( - loadBalancerSourceRanges) - -[//]: # ( - externalTrafficPolicy) - -[//]: # ( - healthCheckNodePort) - -[//]: # ( - sessionAffinityConfig) - -[//]: # () -[//]: # (See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail.) - -[//]: # () -[//]: # (## Next Steps) - -[//]: # () -[//]: # (- Learn how to use KubeDB to run a PostgreSQL database [here](/docs/guides/postgres/README.md).) - -[//]: # (- Learn how to how to get started with PgBouncer [here](/docs/guides/pgbouncer/quickstart/quickstart.md).) - -[//]: # (- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).) +- Learn how to use KubeDB to run Apache Kafka cluster [here](/docs/guides/kafka/README.md). +- Deploy [dedicated topology cluster](/docs/guides/kafka/clustering/topology-cluster/index.md) for Apache Kafka +- Deploy [combined cluster](/docs/guides/kafka/clustering/combined-cluster/index.md) for Apache Kafka +- Monitor your Kafka cluster with KubeDB using [`out-of-the-box` Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md). +- Detail concepts of [KafkaVersion object](/docs/guides/kafka/concepts/kafkaversion.md). +- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/guides/kafka/cli/cli.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). 
diff --git a/docs/guides/druid/concepts/druidautoscaler.md b/docs/guides/druid/concepts/druidautoscaler.md new file mode 100644 index 0000000000..dec795f132 --- /dev/null +++ b/docs/guides/druid/concepts/druidautoscaler.md @@ -0,0 +1,164 @@ +--- +title: KafkaAutoscaler CRD +menu: + docs_{{ .version }}: + identifier: guides-druid-concepts-druidautoscaler + name: KafkaAutoscaler + parent: guides-druid-concepts + weight: 50 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# KafkaAutoscaler + +## What is KafkaAutoscaler + +`KafkaAutoscaler` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration for autoscaling [Kafka](https://kafka.apache.org/) compute resources and storage of database components in a Kubernetes native way. + +## KafkaAutoscaler CRD Specifications + +Like any official Kubernetes resource, a `KafkaAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections. + +Here, some sample `KafkaAutoscaler` CROs for autoscaling different components of the database are given below: + +**Sample `KafkaAutoscaler` for combined cluster:** + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: KafkaAutoscaler +metadata: + name: kf-autoscaler-combined + namespace: demo +spec: + databaseRef: + name: kafka-dev + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + node: + trigger: "On" + podLifeTimeThreshold: 24h + minAllowed: + cpu: 250m + memory: 350Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" + resourceDiffPercentage: 10 + storage: + node: + expansionMode: "Online" + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 +``` + +**Sample `KafkaAutoscaler` for topology cluster:** + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: KafkaAutoscaler +metadata: + name: kf-autoscaler-topology + namespace: demo +spec: + databaseRef: + name: kafka-prod +
opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + broker: + trigger: "On" + podLifeTimeThreshold: 24h + minAllowed: + cpu: 200m + memory: 300Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" + resourceDiffPercentage: 10 + controller: + trigger: "On" + podLifeTimeThreshold: 24h + minAllowed: + cpu: 200m + memory: 300Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" + resourceDiffPercentage: 10 + storage: + broker: + expansionMode: "Online" + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 + controller: + expansionMode: "Online" + trigger: "On" + usageThreshold: 60 + scalingThreshold: 50 +``` + +Here, we are going to describe the various sections of a `KafkaAutoscaler` crd. + +A `KafkaAutoscaler` object has the following fields in the `spec` section. + +### spec.databaseRef + +`spec.databaseRef` is a required field that points to the [Kafka](/docs/guides/kafka/concepts/kafka.md) object for which the autoscaling will be performed. This field consists of the following sub-field: + +- **spec.databaseRef.name :** specifies the name of the [Kafka](/docs/guides/kafka/concepts/kafka.md) object. + +### spec.opsRequestOptions +These are the options to pass to the internally created opsRequest CRO. `opsRequestOptions` has two fields: `timeout` and `apply`. + +### spec.compute + +`spec.compute` specifies the autoscaling configuration for the compute resources i.e. cpu and memory of the database components. This field consists of the following sub-fields: + +- `spec.compute.node` indicates the desired compute autoscaling configuration for a combined Kafka cluster. +- `spec.compute.broker` indicates the desired compute autoscaling configuration for the broker of a topology Kafka database. +- `spec.compute.controller` indicates the desired compute autoscaling configuration for the controller of a topology Kafka database.
+ + +All of them have the following sub-fields: + +- `trigger` indicates if compute autoscaling is enabled for this component of the database. If "On" then compute autoscaling is enabled. If "Off" then compute autoscaling is disabled. +- `minAllowed` specifies the minimal amount of resources that will be recommended; the default is no minimum. +- `maxAllowed` specifies the maximum amount of resources that will be recommended; the default is no maximum. +- `controlledResources` specifies which type of compute resources (cpu and memory) are allowed for autoscaling. Allowed values are "cpu" and "memory". +- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly". +- `resourceDiffPercentage` specifies the minimum resource difference between the recommended value and the current value in percentage. If the difference percentage is greater than this value, then autoscaling will be triggered. +- `podLifeTimeThreshold` specifies the minimum pod lifetime of at least one of the pods before triggering autoscaling. + +There are two more fields, which are only specifiable for the Percona variant in-memory databases. +- `inMemoryStorage.UsageThresholdPercentage` If the db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased. +- `inMemoryStorage.ScalingFactorPercentage` If the db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased by this given scaling percentage. + +### spec.storage + +`spec.storage` specifies the autoscaling configuration for the storage resources of the database components. This field consists of the following sub-fields: + +- `spec.storage.node` indicates the desired storage autoscaling configuration for a combined Kafka cluster. +- `spec.storage.broker` indicates the desired storage autoscaling configuration for the broker of a topology Kafka cluster.
+- `spec.storage.controller` indicates the desired storage autoscaling configuration for the controller of a topology Kafka cluster. + + +All of them have the following sub-fields: + +- `trigger` indicates if storage autoscaling is enabled for this component of the database. If "On" then storage autoscaling is enabled. If "Off" then storage autoscaling is disabled. +- `usageThreshold` indicates the usage percentage threshold; if the current storage usage exceeds it, then storage autoscaling will be triggered. +- `scalingThreshold` indicates the percentage of the current storage that will be scaled. +- `expansionMode` indicates the volume expansion mode. diff --git a/docs/guides/druid/concepts/druidopsrequest.md b/docs/guides/druid/concepts/druidopsrequest.md new file mode 100644 index 0000000000..3015e86959 --- /dev/null +++ b/docs/guides/druid/concepts/druidopsrequest.md @@ -0,0 +1,622 @@ +--- +title: KafkaOpsRequest CRD +menu: + docs_{{ .version }}: + identifier: guides-druid-concepts-druidopsrequest + name: KafkaOpsRequest + parent: guides-druid-concepts + weight: 40 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + + +> New to KubeDB? Please start [here](/docs/README.md). + +# KafkaOpsRequest + +## What is KafkaOpsRequest + +`KafkaOpsRequest` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration for [Kafka](https://kafka.apache.org/) administrative operations like database version updating, horizontal scaling, vertical scaling etc. in a Kubernetes native way. + +## KafkaOpsRequest CRD Specifications + +Like any official Kubernetes resource, a `KafkaOpsRequest` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.
+ +Here, some sample `KafkaOpsRequest` CRs for different administrative operations are given below: + +**Sample `KafkaOpsRequest` for updating database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: update-version + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: kafka-prod + updateVersion: + targetVersion: 3.6.1 +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `KafkaOpsRequest` Objects for Horizontal Scaling of different components of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-hscale-combined + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: kafka-dev + horizontalScaling: + node: 3 +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-hscale-down-topology + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: kafka-prod + horizontalScaling: + topology: + broker: 2 + controller: 2 +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `KafkaOpsRequest` Objects for Vertical Scaling of different components of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-vscale-combined + namespace: demo +spec: + type:
VerticalScaling + databaseRef: + name: kafka-dev + verticalScaling: + node: + resources: + requests: + memory: "1.5Gi" + cpu: "0.7" + limits: + memory: "2Gi" + cpu: "1" +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-vscale-topology + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: kafka-prod + verticalScaling: + broker: + resources: + requests: + memory: "1.5Gi" + cpu: "0.7" + limits: + memory: "2Gi" + cpu: "1" + controller: + resources: + requests: + memory: "1.5Gi" + cpu: "0.7" + limits: + memory: "2Gi" + cpu: "1" +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `KafkaOpsRequest` Objects for Reconfiguring different Kafka modes:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfigure-combined + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-dev + configuration: + applyConfig: + server.properties: | + log.retention.hours=100 + default.replication.factor=2 +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfigure-topology + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-prod + configuration: + applyConfig: +
broker.properties: | + log.retention.hours=100 + default.replication.factor=2 + controller.properties: | + metadata.log.dir=/var/log/kafka/metadata-custom +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfigure-combined + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-dev + configuration: + configSecret: + name: new-configsecret-combined +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-reconfigure-topology + namespace: demo +spec: + type: Reconfigure + databaseRef: + name: kafka-prod + configuration: + configSecret: + name: new-configsecret-topology +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `KafkaOpsRequest` Objects for Volume Expansion of different database components:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-volume-exp-combined + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: kafka-dev + volumeExpansion: + mode: "Online" + node: 2Gi +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type:
Successful + observedGeneration: 1 + phase: Successful +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-volume-exp-topology + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: kafka-prod + volumeExpansion: + mode: "Online" + broker: 2Gi + controller: 2Gi +status: + conditions: + - lastTransitionTime: "2024-07-25T18:22:38Z" + message: Successfully completed the modification process + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful +``` + +**Sample `KafkaOpsRequest` Objects for Reconfiguring TLS of the database:** + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: kafka-prod + tls: + issuerRef: + name: kf-issuer + kind: Issuer + apiGroup: "cert-manager.io" + certificates: + - alias: client + emailAddresses: + - abc@appscode.com +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-rotate + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: kafka-dev + tls: + rotateCertificates: true +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-change-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: kafka-prod + tls: + issuerRef: + name: kf-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" +``` + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kfops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: kafka-prod + tls: + remove: true +``` + +Here, we are going to describe the various sections of a `KafkaOpsRequest` crd. + +A `KafkaOpsRequest` object has the following fields in the `spec` section. 
+ +### spec.databaseRef + +`spec.databaseRef` is a required field that points to the [Kafka](/docs/guides/kafka/concepts/kafka.md) object for which the administrative operations will be performed. This field consists of the following sub-field: + +- **spec.databaseRef.name :** specifies the name of the [Kafka](/docs/guides/kafka/concepts/kafka.md) object. + +### spec.type + +`spec.type` specifies the kind of operation that will be applied to the database. Currently, the following types of operations are allowed in `KafkaOpsRequest`. + +- `UpdateVersion` +- `HorizontalScaling` +- `VerticalScaling` +- `VolumeExpansion` +- `Reconfigure` +- `ReconfigureTLS` +- `Restart` + +> You can perform only one type of operation on a single `KafkaOpsRequest` CR. For example, if you want to update your database and scale up its replicas, then you have to create two separate `KafkaOpsRequest` CRs. At first, you have to create a `KafkaOpsRequest` for updating. Once it is completed, then you can create another `KafkaOpsRequest` for scaling. + +### spec.updateVersion + +If you want to update your Kafka version, you have to specify the `spec.updateVersion` section that specifies the desired version information. This field consists of the following sub-field: + +- `spec.updateVersion.targetVersion` refers to a [KafkaVersion](/docs/guides/kafka/concepts/kafkaversion.md) CR that contains the Kafka version information to which you want to update. + +> You can only update between Kafka versions. KubeDB does not support downgrade for Kafka. + +### spec.horizontalScaling + +If you want to scale up or scale down your Kafka cluster or different components of it, you have to specify the `spec.horizontalScaling` section. This field consists of the following sub-fields: + +- `spec.horizontalScaling.node` indicates the desired number of nodes for a combined Kafka cluster after scaling.
For example, if your cluster currently has 4 replicas in combined mode and you want to add 2 more nodes, then you have to specify 6 in the `spec.horizontalScaling.node` field. Similarly, if you want to remove one node from the cluster, you have to specify 3 in the `spec.horizontalScaling.node` field.
+- `spec.horizontalScaling.topology` indicates the configuration of topology nodes for a Kafka topology cluster after scaling. This field consists of the following sub-fields:
+  - `spec.horizontalScaling.topology.broker` indicates the desired number of broker nodes for a Kafka topology cluster after scaling.
+  - `spec.horizontalScaling.topology.controller` indicates the desired number of controller nodes for a Kafka topology cluster after scaling.
+
+> If the referenced Kafka object is a combined cluster, then you can only specify the `spec.horizontalScaling.node` field. If the referenced Kafka object is a topology cluster, then you can only specify the `spec.horizontalScaling.topology` field. You cannot specify both fields at the same time.
+
+### spec.verticalScaling
+
+`spec.verticalScaling` is a required field specifying the information of `Kafka` resources like `cpu`, `memory` etc. that will be scaled. This field consists of the following sub-fields:
+
+- `spec.verticalScaling.node` indicates the desired resources for a combined Kafka cluster after scaling.
+- `spec.verticalScaling.broker` indicates the desired resources for the brokers of a Kafka topology cluster after scaling.
+- `spec.verticalScaling.controller` indicates the desired resources for the controllers of a Kafka topology cluster after scaling.
+
+> If the referenced Kafka object is a combined cluster, then you can only specify the `spec.verticalScaling.node` field. If the referenced Kafka object is a topology cluster, then you can specify `spec.verticalScaling.broker` or `spec.verticalScaling.controller` or both fields. 
You cannot specify the `spec.verticalScaling.node` field together with any other fields, but you can specify the `spec.verticalScaling.broker` and `spec.verticalScaling.controller` fields at the same time.
+
+All of them have the below structure:
+
+```yaml
+requests:
+  memory: "200Mi"
+  cpu: "0.1"
+limits:
+  memory: "300Mi"
+  cpu: "0.2"
+```
+
+Here, when you specify the resource request, the scheduler uses this information to decide which node to place the container of the Pod on, and when you specify a resource limit for the container, the `kubelet` enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. You can find more details [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+
+### spec.volumeExpansion
+
+> To use the volume expansion feature, the storage class must support volume expansion.
+
+If you want to expand the volume of your Kafka cluster or different components of it, you have to specify the `spec.volumeExpansion` section. This field consists of the following sub-fields:
+
+- `spec.volumeExpansion.mode` specifies the volume expansion mode. Supported values are `Online` & `Offline`. The default is `Online`.
+- `spec.volumeExpansion.node` indicates the desired size for the persistent volume of a combined Kafka cluster.
+- `spec.volumeExpansion.broker` indicates the desired size for the persistent volume of the brokers of a Kafka topology cluster.
+- `spec.volumeExpansion.controller` indicates the desired size for the persistent volume of the controllers of a Kafka topology cluster.
+
+> If the referenced Kafka object is a combined cluster, then you can only specify the `spec.volumeExpansion.node` field. If the referenced Kafka object is a topology cluster, then you can specify `spec.volumeExpansion.broker` or `spec.volumeExpansion.controller` or both fields. 
You cannot specify the `spec.volumeExpansion.node` field together with any other fields, but you can specify the `spec.volumeExpansion.broker` and `spec.volumeExpansion.controller` fields at the same time.
+
+All of them refer to [Quantity](https://v1-22.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#quantity-resource-core) types of Kubernetes.
+
+Example usage of this field is given below:
+
+```yaml
+spec:
+  volumeExpansion:
+    node: "2Gi"
+```
+
+This will expand the volume size of all the combined nodes to 2 GB.
+
+### spec.configuration
+
+If you want to reconfigure your running Kafka cluster or different components of it with new custom configuration, you have to specify the `spec.configuration` section. This field consists of the following sub-fields:
+
+- `spec.configuration.configSecret` points to a secret in the same namespace as the Kafka resource, which contains the new custom configurations. If any configSecret was set before in the database, this secret will replace it. The value of the `spec.stringData` field of the secret should be like below:
+```yaml
+server.properties: |
+  default.replication.factor=3
+  offsets.topic.replication.factor=3
+  log.retention.hours=100
+broker.properties: |
+  default.replication.factor=3
+  offsets.topic.replication.factor=3
+  log.retention.hours=100
+controller.properties: |
+  default.replication.factor=3
+  offsets.topic.replication.factor=3
+  log.retention.hours=100
+```
+> If you want to reconfigure a combined Kafka cluster, then you can only specify the `server.properties` field. If you want to reconfigure a topology Kafka cluster, then you can specify `broker.properties` or `controller.properties` or both fields. You cannot specify `server.properties` together with any other fields, but you can specify `broker.properties` and `controller.properties` at the same time.
+
+- `spec.configuration.applyConfig` contains the new custom config, which will be merged with the previous configuration. 
+
+- `applyConfig` is a map whose keys can be `server.properties`, `broker.properties`, or `controller.properties`, and whose values are the corresponding configurations.
+
+```yaml
+  applyConfig:
+    server.properties: |
+      default.replication.factor=3
+      offsets.topic.replication.factor=3
+      log.retention.hours=100
+    broker.properties: |
+      default.replication.factor=3
+      offsets.topic.replication.factor=3
+      log.retention.hours=100
+    controller.properties: |
+      metadata.log.dir=/var/log/kafka/metadata-custom
+```
+
+- `removeCustomConfig` is a boolean field. Set this field to `true` if you want to remove all the custom configuration from the deployed Kafka cluster.
+
+### spec.tls
+
+If you want to reconfigure the TLS configuration of your Kafka, i.e. add TLS, remove TLS, update the issuer/cluster issuer or certificates, or rotate the certificates, you have to specify the `spec.tls` section. This field consists of the following sub-fields:
+
+- `spec.tls.issuerRef` specifies the issuer name, kind and api group.
+- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/guides/kafka/concepts/kafka.md#spectls).
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificates of this Kafka.
+- `spec.tls.remove` specifies that we want to remove TLS from this Kafka.
+
+### spec.timeout
+As we internally retry the ops request steps multiple times, this `timeout` field helps the users to specify a timeout for those steps of the ops request (in seconds).
+If a step doesn't finish within the specified timeout, the ops request will result in failure.
+
+### spec.apply
+This field controls the execution of the opsRequest depending on the database state. It has two supported values: `Always` & `IfReady`.
+Use `IfReady` if you want to process the opsRequest only when the database is Ready. And use `Always` if you want to process the execution of the opsRequest irrespective of the database state.
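+
+For example, both fields can be set alongside any operation type (the names and the `timeout` value below are illustrative; check the `KafkaOpsRequest` reference of your KubeDB release for the exact value format):
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: KafkaOpsRequest
+metadata:
+  name: kfops-restart
+  namespace: demo
+spec:
+  type: Restart
+  databaseRef:
+    name: kafka-prod
+  timeout: 5m
+  apply: IfReady
+```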
+
+### KafkaOpsRequest `Status`
+
+`.status` describes the current state and progress of a `KafkaOpsRequest` operation. It has the following fields:
+
+### status.phase
+
+`status.phase` indicates the overall phase of the operation for this `KafkaOpsRequest`. It can have the following values:
+
+| Phase       | Meaning                                                                           |
+|-------------|-----------------------------------------------------------------------------------|
+| Successful  | KubeDB has successfully performed the operation requested in the KafkaOpsRequest  |
+| Progressing | KubeDB has started the execution of the applied KafkaOpsRequest                   |
+| Failed      | KubeDB has failed the operation requested in the KafkaOpsRequest                  |
+| Denied      | KubeDB has denied the operation requested in the KafkaOpsRequest                  |
+| Skipped     | KubeDB has skipped the operation requested in the KafkaOpsRequest                 |
+
+Important: The Ops-manager operator can skip an opsRequest only if its execution has not been started yet and a newer opsRequest with the same `spec.type` has been applied in the cluster.
+
+### status.observedGeneration
+
+`status.observedGeneration` shows the most recent generation observed by the `KafkaOpsRequest` controller.
+
+### status.conditions
+
+`status.conditions` is an array that specifies the conditions of different steps of `KafkaOpsRequest` processing. Each condition entry has the following fields:
+
+- `type` specifies the type of the condition. KafkaOpsRequest has the following types of conditions:
+
+| Type                          | Meaning                                                                     |
+|-------------------------------|---------------------------------------------------------------------------|
+| `Progressing`                 | Specifies that the operation is now in the progressing state               |
+| `Successful`                  | Specifies such a state that the operation on the database was successful. 
|
+| `HaltDatabase`                | Specifies such a state that the database is halted by the operator         |
+| `ResumeDatabase`              | Specifies such a state that the database is resumed by the operator        |
+| `Failed`                      | Specifies such a state that the operation on the database failed.          |
+| `StartingBalancer`            | Specifies such a state that the balancer has successfully started          |
+| `StoppingBalancer`            | Specifies such a state that the balancer has successfully stopped          |
+| `UpdateShardImage`            | Specifies such a state that the Shard images have been updated             |
+| `UpdateReplicaSetImage`       | Specifies such a state that the ReplicaSet image has been updated          |
+| `UpdateConfigServerImage`     | Specifies such a state that the ConfigServer image has been updated        |
+| `UpdateMongosImage`           | Specifies such a state that the Mongos image has been updated              |
+| `UpdatePetSetResources`       | Specifies such a state that the PetSet resources have been updated         |
+| `UpdateShardResources`        | Specifies such a state that the Shard resources have been updated          |
+| `UpdateReplicaSetResources`   | Specifies such a state that the ReplicaSet resources have been updated     |
+| `UpdateConfigServerResources` | Specifies such a state that the ConfigServer resources have been updated   |
+| `UpdateMongosResources`       | Specifies such a state that the Mongos resources have been updated         |
+| `ScaleDownReplicaSet`         | Specifies such a state that the scale down operation of replicaset         |
+| `ScaleUpReplicaSet`           | Specifies such a state that the scale up operation of replicaset           |
+| `ScaleUpShardReplicas`        | Specifies such a state that the scale up operation of shard replicas       |
+| `ScaleDownShardReplicas`      | Specifies such a state that the scale down operation of shard replicas     |
+| `ScaleDownConfigServer`       | Specifies such a state that the scale down operation of config server      |
+| `ScaleUpConfigServer`         | Specifies such a state that the scale up operation of config server        |
+| `ScaleMongos`                 | Specifies such a state that the scale operation of 
mongos |
+| `VolumeExpansion`             | Specifies such a state that the volume expansion operation of the database |
+| `ReconfigureReplicaset`       | Specifies such a state that the reconfiguration of replicaset nodes        |
+| `ReconfigureMongos`           | Specifies such a state that the reconfiguration of mongos nodes            |
+| `ReconfigureShard`            | Specifies such a state that the reconfiguration of shard nodes             |
+| `ReconfigureConfigServer`     | Specifies such a state that the reconfiguration of config server nodes     |
+
+- The `status` field is a string, with possible values `True`, `False`, and `Unknown`.
+  - `status` will be `True` if the current transition succeeded.
+  - `status` will be `False` if the current transition failed.
+  - `status` will be `Unknown` if the current transition was denied.
+- The `message` field is a human-readable message indicating details about the condition.
+- The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition.
+- The `lastTransitionTime` field provides a timestamp for when the operation last transitioned from one state to another.
+- The `observedGeneration` shows the most recent condition transition generation observed by the controller.
diff --git a/docs/guides/druid/concepts/druidversion.md b/docs/guides/druid/concepts/druidversion.md
new file mode 100644
index 0000000000..6db904e7d2
--- /dev/null
+++ b/docs/guides/druid/concepts/druidversion.md
@@ -0,0 +1,118 @@
+---
+title: DruidVersion CRD
+menu:
+  docs_{{ .version }}:
+    identifier: guides-druid-concepts-druidversion
+    name: DruidVersion
+    parent: guides-druid-concepts
+    weight: 30
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# KafkaVersion
+
+## What is KafkaVersion
+
+`KafkaVersion` is a Kubernetes `Custom Resource Definition` (CRD). 
It provides a declarative configuration to specify the docker images to be used for the [Kafka](https://kafka.apache.org) database deployed with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `KafkaVersion` custom resource will be created automatically for every supported Kafka version. You have to specify the name of the `KafkaVersion` CR in the `spec.version` field of the [Kafka](/docs/guides/kafka/concepts/kafka.md) crd. Then, KubeDB will use the docker images specified in the `KafkaVersion` CR to create your expected database.
+
+Using a separate CRD for specifying the respective docker images and pod security policy names allows us to modify the images and policies independently of the KubeDB operator. This also allows the users to use a custom image for the database.
+
+## KafkaVersion Spec
+
+As with all other Kubernetes objects, a KafkaVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: KafkaVersion
+metadata:
+  annotations:
+    meta.helm.sh/release-name: kubedb
+    meta.helm.sh/release-namespace: kubedb
+  creationTimestamp: "2024-05-02T06:38:17Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/instance: kubedb
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: kubedb-catalog
+    app.kubernetes.io/version: v2024.4.27
+    helm.sh/chart: kubedb-catalog-v2024.4.27
+  name: 3.6.1
+  resourceVersion: "2881"
+  uid: 778fb80c-b37a-4ac6-bfaa-fec83e5f49c7
+spec:
+  connectCluster:
+    image: ghcr.io/appscode-images/kafka-connect-cluster:3.6.1
+  cruiseControl:
+    image: ghcr.io/appscode-images/kafka-cruise-control:3.6.1
+  db:
+    image: ghcr.io/appscode-images/kafka-kraft:3.6.1
+  podSecurityPolicies:
+    databasePolicyName: kafka-db
+  securityContext:
+    runAsUser: 1001
+  version: 3.6.1
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `KafkaVersion` CR. 
You have to specify this name in the `spec.version` field of the [Kafka](/docs/guides/kafka/concepts/kafka.md) CR.
+
+We follow this convention for naming KafkaVersion CRs:
+
+- Name format: `{Original Kafka image version}-{modification tag}`
+
+We use official Apache Kafka release tar files to build docker images for the supported Kafka versions, and re-tag the image with a v1, v2 etc. modification tag when there is any. An image with a higher modification tag will have more features than the images with lower modification tags. Hence, it is recommended to use the KafkaVersion CR with the highest modification tag to enjoy the latest features.
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of the Kafka database that has been used to build the docker image specified in the `spec.db.image` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will skip processing this CRD object and will add an event to the CRD object specifying that the DB version is deprecated.
+
+### spec.db.image
+
+`spec.db.image` is a required field that specifies the docker image which will be used by the KubeDB operator to create a PetSet for the expected Kafka database.
+
+### spec.cruiseControl.image
+
+`spec.cruiseControl.image` is a required field that specifies the docker image which will be used by the KubeDB operator to create a Deployment for the expected Kafka Cruise Control.
+
+### spec.connectCluster.image
+
+`spec.connectCluster.image` is a required field that specifies the docker image which will be used by the KubeDB operator to create a PetSet for the expected Kafka Connect Cluster. 
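+
+As a sketch of how this CR is consumed, a Kafka object references the `KafkaVersion` by name in its `spec.version` field (the names and storage values below are illustrative):
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Kafka
+metadata:
+  name: kafka
+  namespace: demo
+spec:
+  version: "3.6.1" # must match metadata.name of a KafkaVersion CR
+  replicas: 3
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+```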
+
+
+### spec.podSecurityPolicies.databasePolicyName
+
+`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the database server pod(s) running. If you want to use additional custom pod security policies, you can provide the policy names while installing or upgrading KubeDB, like below:
+
+```bash
+helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
+  --namespace kubedb --create-namespace \
+  --set additionalPodSecurityPolicies[0]=custom-db-policy \
+  --set additionalPodSecurityPolicies[1]=custom-snapshotter-policy \
+  --set-file global.license=/path/to/the/license.txt \
+  --wait --burst-limit=10000 --debug
+```
+
+## Next Steps
+
+- Learn about Kafka CRD [here](/docs/guides/kafka/concepts/kafka.md).
+- Deploy your first Kafka database with KubeDB by following the guide [here](/docs/guides/kafka/quickstart/kafka/index.md).