+
+## User Guide
+- [Quickstart Druid](/docs/v2024.4.27/guides/druid/quickstart/overview/) with KubeDB Operator.
+
+[//]: # (- Druid Clustering supported by KubeDB)
+
+[//]: # ( - [Topology Clustering](/docs/guides/druid/clustering/topology-cluster/index.md))
+
+[//]: # (- Use [kubedb cli](/docs/guides/druid/cli/cli.md) to manage databases like kubectl for Kubernetes.)
+
+- Detail concepts of [Druid object](/docs/v2024.4.27/guides/druid/concepts/druid).
+
+[//]: # (- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).)
\ No newline at end of file
diff --git a/content/docs/v2024.4.27/guides/druid/_index.md b/content/docs/v2024.4.27/guides/druid/_index.md
new file mode 100644
index 0000000000..884a704dd2
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/druid/_index.md
@@ -0,0 +1,22 @@
+---
+title: Druid
+menu:
+  docs_v2024.4.27:
+    identifier: dr-druid-guides
+    name: Druid
+    parent: guides
+    weight: 10
+menu_name: docs_v2024.4.27
+info:
+  autoscaler: v0.30.0
+  cli: v0.45.0
+  dashboard: v0.21.0
+  installer: v2024.4.27
+  ops-manager: v0.32.0
+  provisioner: v0.45.0
+  schema-manager: v0.21.0
+  ui-server: v0.21.0
+  version: v2024.4.27
+  webhook-server: v0.21.0
+---
+
diff --git a/content/docs/v2024.4.27/guides/druid/concepts/_index.md b/content/docs/v2024.4.27/guides/druid/concepts/_index.md
new file mode 100755
index 0000000000..fca22a59e9
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/druid/concepts/_index.md
@@ -0,0 +1,22 @@
+---
+title: Druid Concepts
+menu:
+  docs_v2024.4.27:
+    identifier: dr-concepts-druid
+    name: Concepts
+    parent: dr-druid-guides
+    weight: 20
+menu_name: docs_v2024.4.27
+info:
+  autoscaler: v0.30.0
+  cli: v0.45.0
+  dashboard: v0.21.0
+  installer: v2024.4.27
+  ops-manager: v0.32.0
+  provisioner: v0.45.0
+  schema-manager: v0.21.0
+  ui-server: v0.21.0
+  version: v2024.4.27
+  webhook-server: v0.21.0
+---
+
diff --git a/content/docs/v2024.4.27/guides/druid/concepts/appbinding.md b/content/docs/v2024.4.27/guides/druid/concepts/appbinding.md
new file mode 100644
index 0000000000..3962cc1c37
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/druid/concepts/appbinding.md
@@ -0,0 +1,250 @@
+---
+title: AppBinding CRD
+menu:
+  docs_v2024.4.27:
+    identifier: dr-appbinding-concepts
+    name: AppBinding
+    parent: dr-concepts-druid
+    weight: 20
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+  autoscaler: v0.30.0
+  cli: v0.45.0
+  dashboard: v0.21.0
+  installer: v2024.4.27
+  ops-manager: v0.32.0
+  provisioner: v0.45.0
+  schema-manager: v0.21.0
+  ui-server: v0.21.0
+  version: v2024.4.27
+  webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# AppBinding
+
+## What is AppBinding
+
+An `AppBinding` is a Kubernetes `CustomResourceDefinition` (CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters, and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding).
+
+If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), an `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually, pointing to your desired database.
+
+KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`.
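+
+For reference, a manually created `AppBinding` pointing to a Druid cluster might look like the following sketch. This is hypothetical: KubeDB generates this object for you when it provisions Druid, and the service name, secret name, scheme, and port below are assumptions.
+
+```bash
+# Hypothetical sketch only -- KubeDB normally creates this automatically
+# when it provisions a Druid cluster.
+kubectl apply -n demo -f - <<EOF
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  name: druid-quickstart
+spec:
+  type: kubedb.com/druid
+  secret:
+    name: druid-quickstart-admin-cred   # auth secret with username/password
+  clientConfig:
+    service:
+      name: druid-quickstart-routers    # service in front of the routers
+      scheme: http
+      port: 8888
+EOF
+```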
+
+[//]: # (## AppBinding CRD Specification)
+
+[//]: # ()
+[//]: # (Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section.)
+
+[//]: # ()
+[//]: # (An `AppBinding` object created by `KubeDB` for PostgreSQL database is shown below,)
+
+[//]: # ()
+[//]: # (```yaml)
+
+[//]: # (apiVersion: appcatalog.appscode.com/v1alpha1)
+
+[//]: # (kind: AppBinding)
+
+[//]: # (metadata:)
+
+[//]: # ( name: quick-postgres)
+
+[//]: # ( namespace: demo)
+
+[//]: # ( labels:)
+
+[//]: # ( app.kubernetes.io/component: database)
+
+[//]: # ( app.kubernetes.io/instance: quick-postgres)
+
+[//]: # ( app.kubernetes.io/managed-by: kubedb.com)
+
+[//]: # ( app.kubernetes.io/name: postgres)
+
+[//]: # ( app.kubernetes.io/version: "10.2"-v2)
+
+[//]: # ( app.kubernetes.io/name: postgreses.kubedb.com)
+
+[//]: # ( app.kubernetes.io/instance: quick-postgres)
+
+[//]: # (spec:)
+
+[//]: # ( type: kubedb.com/postgres)
+
+[//]: # ( secret:)
+
+[//]: # ( name: quick-postgres-auth)
+
+[//]: # ( clientConfig:)
+
+[//]: # ( service:)
+
+[//]: # ( name: quick-postgres)
+
+[//]: # ( path: /)
+
+[//]: # ( port: 5432)
+
+[//]: # ( query: sslmode=disable)
+
+[//]: # ( scheme: postgresql)
+
+[//]: # ( secretTransforms:)
+
+[//]: # ( - renameKey:)
+
+[//]: # ( from: POSTGRES_USER)
+
+[//]: # ( to: username)
+
+[//]: # ( - renameKey:)
+
+[//]: # ( from: POSTGRES_PASSWORD)
+
+[//]: # ( to: password)
+
+[//]: # ( version: "10.2")
+
+[//]: # (```)
+
+[//]: # ()
+[//]: # (Here, we are going to describe the sections of an `AppBinding` crd.)
+
+[//]: # ()
+[//]: # (### AppBinding `Spec`)
+
+[//]: # ()
+[//]: # (An `AppBinding` object has the following fields in the `spec` section:)
+
+[//]: # ()
+[//]: # (#### spec.type)
+
+[//]: # ()
+[//]: # (`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.)
+
+[//]: # ()
+[//]: # (This field follows the following format: `/`. The above AppBinding is pointing to a `postgres` resource under `kubedb.com` group.)
+
+[//]: # ()
+[//]: # (Here, the variables are parsed as follows:)
+
+[//]: # ()
+[//]: # (| Variable | Usage |)
+
+[//]: # (| --------------------- | --------------------------------------------------------------------------------------------------------------------------------- |)
+
+[//]: # (| `TARGET_APP_GROUP` | Represents the application group where the respective app belongs (i.e: `kubedb.com`). |)
+
+[//]: # (| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `postgres`). |)
+
+[//]: # (| `TARGET_APP_TYPE` | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/postgres`). |)
+
+[//]: # ()
+[//]: # (#### spec.secret)
+
+[//]: # ()
+[//]: # (`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.)
+
+[//]: # ()
+[//]: # (This secret must contain the following keys:)
+
+[//]: # ()
+[//]: # (PostgreSQL :)
+
+[//]: # ()
+[//]: # (| Key | Usage |)
+
+[//]: # (| ------------------- | --------------------------------------------------- |)
+
+[//]: # (| `POSTGRES_USER` | Username of the target database. |)
+
+[//]: # (| `POSTGRES_PASSWORD` | Password for the user specified by `POSTGRES_USER`. |)
+
+[//]: # ()
+[//]: # (MySQL :)
+
+[//]: # ()
+[//]: # (| Key | Usage |)
+
+[//]: # (| ---------- | ---------------------------------------------- |)
+
+[//]: # (| `username` | Username of the target database. |)
+
+[//]: # (| `password` | Password for the user specified by `username`. |)
+
+[//]: # ()
+[//]: # (MongoDB :)
+
+[//]: # ()
+[//]: # (| Key | Usage |)
+
+[//]: # (| ---------- | ---------------------------------------------- |)
+
+[//]: # (| `username` | Username of the target database. |)
+
+[//]: # (| `password` | Password for the user specified by `username`. |)
+
+[//]: # ()
+[//]: # (Elasticsearch:)
+
+[//]: # ()
+[//]: # (| Key | Usage |)
+
+[//]: # (| ---------------- | ----------------------- |)
+
+[//]: # (| `ADMIN_USERNAME` | Admin username |)
+
+[//]: # (| `ADMIN_PASSWORD` | Password for admin user |)
+
+[//]: # ()
+[//]: # (#### spec.clientConfig)
+
+[//]: # ()
+[//]: # (`spec.clientConfig` defines how to communicate with the target database. You can use either an URL or a Kubernetes service to connect with the database. You don't have to specify both of them.)
+
+[//]: # ()
+[//]: # (You can configure following fields in `spec.clientConfig` section:)
+
+[//]: # ()
+[//]: # (- **spec.clientConfig.url**)
+
+[//]: # ()
+[//]: # ( `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use `spec.clientConfig.service` section instead.)
+
+[//]: # ()
+[//]: # ( > Note that, attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.)
+
+[//]: # ()
+[//]: # (- **spec.clientConfig.service**)
+
+[//]: # ()
+[//]: # ( If you are running the database inside the Kubernetes cluster, you can use Kubernetes service to connect with the database. You have to specify the following fields in `spec.clientConfig.service` section if you manually create an `AppBinding` object.)
+
+[//]: # ()
+[//]: # ( - **name :** `name` indicates the name of the service that connects with the target database.)
+
+[//]: # ( - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.)
+
+[//]: # ( - **port :** `port` specifies the port where the target database is running.)
+
+[//]: # ()
+[//]: # (- **spec.clientConfig.insecureSkipTLSVerify**)
+
+[//]: # ()
+[//]: # ( `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage to disable TLS verification during backup. You should provide the respective CA bundle through `spec.clientConfig.caBundle` field instead.)
+
+[//]: # ()
+[//]: # (- **spec.clientConfig.caBundle**)
+
+[//]: # ()
+[//]: # ( `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.)
+
+[//]: # (## Next Steps)
+
+[//]: # ()
+[//]: # (- Learn how to use KubeDB to manage various databases [here](/docs/guides/README.md).)
+
+[//]: # (- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).)
diff --git a/content/docs/v2024.4.27/guides/druid/concepts/catalog.md b/content/docs/v2024.4.27/guides/druid/concepts/catalog.md
new file mode 100644
index 0000000000..1badb21164
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/druid/concepts/catalog.md
@@ -0,0 +1,122 @@
+---
+title: DruidVersion CRD
+menu:
+  docs_v2024.4.27:
+    identifier: dr-catalog-concepts
+    name: DruidVersion
+    parent: dr-concepts-druid
+    weight: 15
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+  autoscaler: v0.30.0
+  cli: v0.45.0
+  dashboard: v0.21.0
+  installer: v2024.4.27
+  ops-manager: v0.32.0
+  provisioner: v0.45.0
+  schema-manager: v0.21.0
+  ui-server: v0.21.0
+  version: v2024.4.27
+  webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# DruidVersion
+
+[//]: # (## What is DruidVersion)
+
+[//]: # ()
+[//]: # (`DruidVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for [PgBouncer](https://pgbouncer.github.io/) server deployed with KubeDB in a Kubernetes native way.)
+
+[//]: # ()
+[//]: # (When you install KubeDB, a `DruidVersion` custom resource will be created automatically for every supported PgBouncer release versions. You have to specify the name of `DruidVersion` crd in `spec.version` field of [PgBouncer](/docs/guides/pgbouncer/concepts/pgbouncer.md) crd. Then, KubeDB will use the docker images specified in the `DruidVersion` crd to create your expected PgBouncer instance.)
+
+[//]: # ()
+[//]: # (Using a separate crd for specifying respective docker image names allow us to modify the images independent of KubeDB operator. This will also allow the users to use a custom PgBouncer image for their server. For more details about how to use custom image with PgBouncer in KubeDB, please visit [here](/docs/guides/pgbouncer/custom-versions/setup.md).)
+
+[//]: # (## DruidVersion Specification)
+
+[//]: # ()
+[//]: # (As with all other Kubernetes objects, a DruidVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.)
+
+[//]: # ()
+[//]: # (```yaml)
+
+[//]: # (apiVersion: catalog.kubedb.com/v1alpha1)
+
+[//]: # (kind: DruidVersion)
+
+[//]: # (metadata:)
+
+[//]: # ( name: "1.17.0")
+
+[//]: # ( labels:)
+
+[//]: # ( app: kubedb)
+
+[//]: # (spec:)
+
+[//]: # ( deprecated: false)
+
+[//]: # ( version: "1.17.0")
+
+[//]: # ( pgBouncer:)
+
+[//]: # ( image: "${KUBEDB_CATALOG_REGISTRY}/pgbouncer:1.17.0")
+
+[//]: # ( exporter:)
+
+[//]: # ( image: "${KUBEDB_CATALOG_REGISTRY}/pgbouncer_exporter:v0.1.1")
+
+[//]: # (```)
+
+[//]: # ()
+[//]: # (### metadata.name)
+
+[//]: # ()
+[//]: # (`metadata.name` is a required field that specifies the name of the `DruidVersion` crd. You have to specify this name in `spec.version` field of [PgBouncer](/docs/guides/pgbouncer/concepts/pgbouncer.md) crd.)
+
+[//]: # ()
+[//]: # (We follow this convention for naming DruidVersion crd:)
+
+[//]: # ()
+[//]: # (- Name format: `{Original pgbouncer image version}-{modification tag}`)
+
+[//]: # ()
+[//]: # (We plan to modify original PgBouncer docker images to support additional features. Re-tagging the image with v1, v2 etc. modification tag helps separating newer iterations from the older ones. An image with higher modification tag will have more features than the images with lower modification tag. Hence, it is recommended to use DruidVersion crd with highest modification tag to take advantage of the latest features.)
+
+[//]: # ()
+[//]: # (### spec.version)
+
+[//]: # ()
+[//]: # (`spec.version` is a required field that specifies the original version of PgBouncer that has been used to build the docker image specified in `spec.server.image` field.)
+
+[//]: # ()
+[//]: # (### spec.deprecated)
+
+[//]: # ()
+[//]: # (`spec.deprecated` is an optional field that specifies whether the docker images specified here is supported by the current KubeDB operator.)
+
+[//]: # ()
+[//]: # (The default value of this field is `false`. If `spec.deprecated` is set `true`, KubeDB operator will not create the server and other respective resources for this version.)
+
+[//]: # ()
+[//]: # (### spec.pgBouncer.image)
+
+[//]: # ()
+[//]: # (`spec.pgBouncer.image` is a required field that specifies the docker image which will be used to create Statefulset by KubeDB operator to create expected PgBouncer server.)
+
+[//]: # ()
+[//]: # (### spec.exporter.image)
+
+[//]: # ()
+[//]: # (`spec.exporter.image` is a required field that specifies the image which will be used to export Prometheus metrics.)
+
+[//]: # (## Next Steps)
+
+[//]: # ()
+[//]: # (- Learn about PgBouncer crd [here](/docs/guides/pgbouncer/concepts/catalog.md).)
+
+[//]: # (- Deploy your first PgBouncer server with KubeDB by following the guide [here](/docs/guides/pgbouncer/quickstart/quickstart.md).)
\ No newline at end of file
diff --git a/content/docs/v2024.4.27/guides/druid/concepts/druid.md b/content/docs/v2024.4.27/guides/druid/concepts/druid.md
new file mode 100644
index 0000000000..e9b15ba4cf
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/druid/concepts/druid.md
@@ -0,0 +1,379 @@
+---
+title: Druid CRD
+menu:
+  docs_v2024.4.27:
+    identifier: dr-druid-concepts
+    name: Druid
+    parent: dr-concepts-druid
+    weight: 10
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+  autoscaler: v0.30.0
+  cli: v0.45.0
+  dashboard: v0.21.0
+  installer: v2024.4.27
+  ops-manager: v0.32.0
+  provisioner: v0.45.0
+  schema-manager: v0.21.0
+  ui-server: v0.21.0
+  version: v2024.4.27
+  webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+
+# Druid
+
+[//]: # ()
+[//]: # (## What is PgBouncer)
+
+[//]: # ()
+[//]: # (`PgBouncer` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [PgBouncer](https://www.pgbouncer.github.io/) in a Kubernetes native way. You only need to describe the desired configurations in a `PgBouncer` object, and the KubeDB operator will create Kubernetes resources in the desired state for you.)
+
+[//]: # ()
+[//]: # (## PgBouncer Spec)
+
+[//]: # ()
+[//]: # (Like any official Kubernetes resource, a `PgBouncer` object has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections.)
+
+[//]: # ()
+[//]: # (Below is an example PgBouncer object.)
+
+[//]: # ()
+[//]: # (```yaml)
+
+[//]: # (apiVersion: kubedb.com/v1alpha2)
+
+[//]: # (kind: PgBouncer)
+
+[//]: # (metadata:)
+
+[//]: # ( name: pgbouncer-server)
+
+[//]: # ( namespace: demo)
+
+[//]: # (spec:)
+
+[//]: # ( version: "1.18.0")
+
+[//]: # ( replicas: 2)
+
+[//]: # ( databases:)
+
+[//]: # ( - alias: "postgres")
+
+[//]: # ( databaseName: "postgres")
+
+[//]: # ( databaseRef:)
+
+[//]: # ( name: "quick-postgres")
+
+[//]: # ( namespace: demo)
+
+[//]: # ( connectionPool:)
+
+[//]: # ( maxClientConnections: 20)
+
+[//]: # ( reservePoolSize: 5)
+
+[//]: # ( monitor:)
+
+[//]: # ( agent: prometheus.io/operator)
+
+[//]: # ( prometheus:)
+
+[//]: # ( serviceMonitor:)
+
+[//]: # ( labels:)
+
+[//]: # ( release: prometheus)
+
+[//]: # ( interval: 10s)
+
+[//]: # (```)
+
+[//]: # ()
+[//]: # (### spec.version)
+
+[//]: # ()
+[//]: # (`spec.version` is a required field that specifies the name of the [PgBouncerVersion](/docs/guides/pgbouncer/concepts/catalog.md) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `PgBouncerVersion` resources,)
+
+[//]: # ()
+[//]: # (- `1.18.0`)
+
+[//]: # ()
+[//]: # (### spec.replicas)
+
+[//]: # ()
+[//]: # (`spec.replicas` specifies the total number of available pgbouncer server nodes for each crd. KubeDB uses `PodDisruptionBudget` to ensure that majority of the replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions).)
+
+[//]: # ()
+[//]: # (### spec.databases)
+
+[//]: # ()
+[//]: # (`spec.databases` specifies an array of postgres databases that pgbouncer should add to its connection pool. It contains three `required` fields and two `optional` fields for each database connection.)
+
+[//]: # ()
+[//]: # (- `spec.databases.alias`: specifies an alias for the target database located in a postgres server specified by an appbinding.)
+
+[//]: # (- `spec.databases.databaseName`: specifies the name of the target database.)
+
+[//]: # (- `spec.databases.databaseRef`: specifies the name and namespace of the AppBinding that contains the path to a PostgreSQL server where the target database can be found.)
+
+[//]: # ()
+[//]: # (ConnectionPool is used to configure pgbouncer connection-pool. All the fields here are accompanied by default values and can be left unspecified if no customisation is required by the user.)
+
+[//]: # ()
+[//]: # (- `spec.connectionPool.port`: specifies the port on which pgbouncer should listen to connect with clients. The default is 5432.)
+
+[//]: # ()
+[//]: # (- `spec.connectionPool.poolMode`: specifies the value of pool_mode. Specifies when a server connection can be reused by other clients.)
+
+[//]: # ()
+[//]: # ( - session)
+
+[//]: # ()
+[//]: # ( Server is released back to pool after client disconnects. Default.)
+
+[//]: # ()
+[//]: # ( - transaction)
+
+[//]: # ()
+[//]: # ( Server is released back to pool after transaction finishes.)
+
+[//]: # ()
+[//]: # ( - statement)
+
+[//]: # ()
+[//]: # ( Server is released back to pool after query finishes. Long transactions spanning multiple statements are disallowed in this mode.)
+
+[//]: # ()
+[//]: # (- `spec.connectionPool.maxClientConnections`: specifies the value of max_client_conn. When increased then the file descriptor limits should also be increased. Note that actual number of file descriptors used is more than max_client_conn. Theoretical maximum used is:)
+
+[//]: # ()
+[//]: # ( ```bash)
+
+[//]: # ( max_client_conn + (max pool_size * total databases * total users))
+
+[//]: # ( ```)
+
+[//]: # ()
+[//]: # ( if each user connects under its own username to server. If a database user is specified in connect string (all users connect under same username), the theoretical maximum is:)
+
+[//]: # ()
+[//]: # ( ```bash)
+
+[//]: # ( max_client_conn + (max pool_size * total databases))
+
+[//]: # ( ```)
+
+[//]: # ()
+[//]: # ( The theoretical maximum should be never reached, unless somebody deliberately crafts special load for it. Still, it means you should set the number of file descriptors to a safely high number.)
+
+[//]: # ()
+[//]: # ( Search for `ulimit` in your favorite shell man page. Note: `ulimit` does not apply in a Windows environment.)
+
+[//]: # ()
+[//]: # ( Default: 100)
+
+[//]: # ()
+[//]: # (- `spec.connectionPool.defaultPoolSize`: specifies the value of default_pool_size. Used to determine how many server connections to allow per user/database pair. Can be overridden in the per-database configuration.)
+
+[//]: # ()
+[//]: # ( Default: 20)
+
+[//]: # ()
+[//]: # (- `spec.connectionPool.minPoolSize`: specifies the value of min_pool_size. PgBouncer adds more server connections to pool if below this number. Improves behavior when usual load comes suddenly back after period of total inactivity.)
+
+[//]: # ()
+[//]: # ( Default: 0 (disabled))
+
+[//]: # ()
+[//]: # (- `spec.connectionPool.reservePoolSize`: specifies the value of reserve_pool_size. Used to determine how many additional connections to allow to a pool. 0 disables.)
+
+[//]: # ()
+[//]: # ( Default: 0 (disabled))
+
+[//]: # ()
+[//]: # (- `spec.connectionPool.reservePoolTimeout`: specifies the value of reserve_pool_timeout. If a client has not been serviced in this many seconds, pgbouncer enables use of additional connections from reserve pool. 0 disables.)
+
+[//]: # ()
+[//]: # ( Default: 5.0)
+
+[//]: # ()
+[//]: # (- `spec.connectionPool.maxDbConnections`: specifies the value of max_db_connections. PgBouncer does not allow more than this many connections per-database (regardless of pool - i.e. user). It should be noted that when you hit the limit, closing a client connection to one pool will not immediately allow a server connection to be established for another pool, because the server connection for the first pool is still open. Once the server connection closes (due to idle timeout), a new server connection will immediately be opened for the waiting pool.)
+
+[//]: # ()
+[//]: # ( Default: unlimited)
+
+[//]: # ()
+[//]: # (- `spec.connectionPool.maxUserConnections`: specifies the value of max_user_connections. PgBouncer does not allow more than this many connections per-user (regardless of pool - i.e. user). It should be noted that when you hit the limit, closing a client connection to one pool will not immediately allow a server connection to be established for another pool, because the server connection for the first pool is still open. Once the server connection closes (due to idle timeout), a new server connection will immediately be opened for the waiting pool.)
+
+[//]: # ( Default: unlimited)
+
+[//]: # ()
+[//]: # (- `spec.connectionPool.statsPeriod`: sets how often the averages shown in various `SHOW` commands are updated and how often aggregated statistics are written to the log.)
+
+[//]: # ( Default: 60)
+
+[//]: # ()
+[//]: # (- `spec.connectionPool.authType`: specifies how to authenticate users. PgBouncer supports several authentication methods including pam, md5, scram-sha-256, trust , or any. However hba, and cert are not supported.)
+
+[//]: # ()
+[//]: # (- `spec.connectionPool.IgnoreStartupParameters`: specifies comma-separated startup parameters that pgbouncer knows are handled by admin and it can ignore them.)
+
+[//]: # ()
+[//]: # (### spec.monitor)
+
+[//]: # ()
+[//]: # (PgBouncer managed by KubeDB can be monitored with builtin-Prometheus and Prometheus operator out-of-the-box. To learn more,)
+
+[//]: # ()
+[//]: # (- [Monitor PgBouncer with builtin Prometheus](/docs/guides/pgbouncer/monitoring/using-builtin-prometheus.md))
+
+[//]: # (- [Monitor PgBouncer with Prometheus operator](/docs/guides/pgbouncer/monitoring/using-prometheus-operator.md))
+
+[//]: # ()
+[//]: # (### spec.podTemplate)
+
+[//]: # ()
+[//]: # (KubeDB allows providing a template for pgbouncer pods through `spec.podTemplate`. KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for PgBouncer server)
+
+[//]: # ()
+[//]: # (KubeDB accept following fields to set in `spec.podTemplate:`)
+
+[//]: # ()
+[//]: # (- metadata)
+
+[//]: # ( - annotations (pod's annotation))
+
+[//]: # (- controller)
+
+[//]: # ( - annotations (statefulset's annotation))
+
+[//]: # (- spec:)
+
+[//]: # ( - env)
+
+[//]: # ( - resources)
+
+[//]: # ( - initContainers)
+
+[//]: # ( - imagePullSecrets)
+
+[//]: # ( - affinity)
+
+[//]: # ( - tolerations)
+
+[//]: # ( - priorityClassName)
+
+[//]: # ( - priority)
+
+[//]: # ( - lifecycle)
+
+[//]: # ()
+[//]: # (Usage of some fields in `spec.podTemplate` is described below,)
+
+[//]: # ()
+[//]: # (#### spec.podTemplate.spec.env)
+
+[//]: # ()
+[//]: # (`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the PgBouncer docker image. To know about supported environment variables, please visit [here](https://hub.docker.com/kubedb/pgbouncer/).)
+
+[//]: # ()
+[//]: # (Also, note that KubeDB does not allow updates to the environment variables as updating them does not have any effect once the server is created. If you try to update environment variables, KubeDB operator will reject the request with following error,)
+
+[//]: # ()
+[//]: # (```ini)
+
+[//]: # (Error from server (BadRequest): error when applying patch:)
+
+[//]: # (...)
+
+[//]: # (for: "./pgbouncer.yaml": admission webhook "pgbouncer.validators.kubedb.com" denied the request: precondition failed for:)
+
+[//]: # (...)
+
+[//]: # (At least one of the following was changed:)
+
+[//]: # ( apiVersion)
+
+[//]: # ( kind)
+
+[//]: # ( name)
+
+[//]: # ( namespace)
+
+[//]: # ( spec.podTemplate.spec.nodeSelector)
+
+[//]: # (```)
+
+[//]: # ()
+[//]: # (#### spec.podTemplate.spec.imagePullSecrets)
+
+[//]: # ()
+[//]: # (`spec.podTemplate.spec.imagePullSecrets` is an optional field that points to secrets to be used for pulling docker image if you are using a private docker registry. For more details on how to use private docker registry, please visit [here](/docs/guides/pgbouncer/private-registry/using-private-registry.md).)
+
+[//]: # ()
+[//]: # (#### spec.podTemplate.spec.nodeSelector)
+
+[//]: # ()
+[//]: # (`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) .)
+
+[//]: # ()
+[//]: # (#### spec.podTemplate.spec.resources)
+
+[//]: # ()
+[//]: # (`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).)
+
+[//]: # ()
+[//]: # (### spec.serviceTemplate)
+
+[//]: # ()
+[//]: # (KubeDB creates a service for each PgBouncer instance. The service has the same name as the `pgbouncer.name` and points to pgbouncer pods.)
+
+[//]: # ()
+[//]: # (You can provide template for this service using `spec.serviceTemplate`. This will allow you to set the type and other properties of the service. If `spec.serviceTemplate` is not provided, KubeDB will create a service of type `ClusterIP` with minimal settings.)
+
+[//]: # ()
+[//]: # (KubeDB allows the following fields to set in `spec.serviceTemplate`:)
+
+[//]: # ()
+[//]: # (- metadata:)
+
+[//]: # ( - annotations)
+
+[//]: # (- spec:)
+
+[//]: # ( - type)
+
+[//]: # ( - ports)
+
+[//]: # ( - clusterIP)
+
+[//]: # ( - externalIPs)
+
+[//]: # ( - loadBalancerIP)
+
+[//]: # ( - loadBalancerSourceRanges)
+
+[//]: # ( - externalTrafficPolicy)
+
+[//]: # ( - healthCheckNodePort)
+
+[//]: # ( - sessionAffinityConfig)
+
+[//]: # ()
+[//]: # (See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail.)
+
+[//]: # ()
+[//]: # (## Next Steps)
+
+[//]: # ()
+[//]: # (- Learn how to use KubeDB to run a PostgreSQL database [here](/docs/guides/postgres/README.md).)
+
+[//]: # (- Learn how to how to get started with PgBouncer [here](/docs/guides/pgbouncer/quickstart/quickstart.md).)
+
+[//]: # (- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).)
diff --git a/content/docs/v2024.4.27/guides/druid/quickstart/_index.md b/content/docs/v2024.4.27/guides/druid/quickstart/_index.md
new file mode 100644
index 0000000000..981490c385
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/druid/quickstart/_index.md
@@ -0,0 +1,22 @@
+---
+title: Druid Quickstart
+menu:
+  docs_v2024.4.27:
+    identifier: dr-quickstart-druid
+    name: Quickstart
+    parent: dr-druid-guides
+    weight: 15
+menu_name: docs_v2024.4.27
+info:
+  autoscaler: v0.30.0
+  cli: v0.45.0
+  dashboard: v0.21.0
+  installer: v2024.4.27
+  ops-manager: v0.32.0
+  provisioner: v0.45.0
+  schema-manager: v0.21.0
+  ui-server: v0.21.0
+  version: v2024.4.27
+  webhook-server: v0.21.0
+---
+
diff --git a/content/docs/v2024.4.27/guides/druid/quickstart/overview/index.md b/content/docs/v2024.4.27/guides/druid/quickstart/overview/index.md
new file mode 100644
index 0000000000..1885754dfb
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/druid/quickstart/overview/index.md
@@ -0,0 +1,785 @@
+---
+title: Druid Quickstart
+menu:
+  docs_v2024.4.27:
+    identifier: dr-quickstart-quickstart
+    name: Overview
+    parent: dr-quickstart-druid
+    weight: 10
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+  autoscaler: v0.30.0
+  cli: v0.45.0
+  dashboard: v0.21.0
+  installer: v2024.4.27
+  ops-manager: v0.32.0
+  provisioner: v0.45.0
+  schema-manager: v0.21.0
+  ui-server: v0.21.0
+  version: v2024.4.27
+  webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# Druid QuickStart
+
+This tutorial will show you how to use KubeDB to run an [Apache Druid](https://druid.apache.org/) database.
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install the KubeDB CLI on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/v2024.4.27/setup/README). Make sure to install with the Helm flags `--set global.featureGates.Druid=true` (to enable the **Druid CRD**) and `--set global.featureGates.ZooKeeper=true` (to enable the **ZooKeeper CRD**), since Druid depends on ZooKeeper as an external dependency.
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create namespace demo
+namespace/demo created
+
+$ kubectl get namespace
+NAME   STATUS   AGE
+demo   Active   9s
+```
+
+> Note: YAML files used in this tutorial are stored in [guides/druid/quickstart/overview/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/druid/quickstart/overview/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+> We have designed this tutorial to demonstrate a production setup of KubeDB-managed Apache Druid. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/v2024.4.27/guides/druid/quickstart/overview/#tips-for-testing).
+
+## Find Available StorageClass
+
+We will have to provide a `StorageClass` in the Druid CRD specification. Check the available `StorageClass` options in your cluster using the following command,
+
+```bash
+$ kubectl get storageclass
+NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  14h
+```
+
+Here, we have the `standard` StorageClass in our cluster, provisioned by the [Local Path Provisioner](https://github.com/rancher/local-path-provisioner).
+
+## Find Available DruidVersion
+
+When you install the KubeDB operator, it registers a CRD named [DruidVersion](/docs/v2024.4.27/guides/druid/concepts/catalog). The installation process comes with a set of tested DruidVersion objects. Let's check the available DruidVersions,
+
+```bash
+$ kubectl get druidversion
+NAME     VERSION   DB_IMAGE                               DEPRECATED   AGE
+28.0.1   28.0.1    ghcr.io/appscode-images/druid:28.0.1                4h47m
+```
+
+Notice the `DEPRECATED` column. Here, `true` means that this DruidVersion is deprecated for the current KubeDB version. KubeDB will not work with a deprecated DruidVersion. You can also use the short form `drversion` to check the available DruidVersions.
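+
+For example, using the short form (assuming the CR exposes `spec.version` like other KubeDB catalog resources, this prints the underlying Druid version):
+
+```bash
+# Short-form equivalent of `kubectl get druidversion`
+$ kubectl get drversion 28.0.1 -o jsonpath='{.spec.version}{"\n"}'
+28.0.1
+```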
+
+In this tutorial, we will use `28.0.1` DruidVersion CR to create a Druid cluster.
+
+## Get External Dependencies Ready
+
+### Metadata Storage
+
+Druid uses the metadata store to house various metadata about the system, but not to store the actual data. The metadata store retains all metadata essential for a Druid cluster to work. **Apache Derby** is the default metadata store for Druid; however, it is not suitable for production. **MySQL** and **PostgreSQL** are more production-suitable metadata stores.
+
+Luckily, both **PostgreSQL** and **MySQL** are readily available in KubeDB as CRDs and can easily be deployed using the [MySQL guide](/docs/v2024.4.27/guides/mysql/quickstart/) and the [PostgreSQL guide](/docs/v2024.4.27/guides/postgres/quickstart/quickstart).
+
+In this tutorial, we will use a **MySQL** instance named `mysql-demo` in the `demo` namespace and create a database named `druid` inside it using an [initialization script](/docs/v2024.4.27/guides/mysql/initialization/#prepare-initialization-scripts).
+
+Let’s create a ConfigMap with the initialization script first, and then create the `mysql-demo` MySQL instance,
+
+```bash
+$ kubectl create configmap -n demo my-init-script \
+--from-literal=init.sql="$(curl -fsSL https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/druid/quickstart/mysql-init-script.sql)"
+configmap/my-init-script created
+
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/druid/quickstart/mysql-demo.yaml
+mysql.kubedb.com/mysql-demo created
+```
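+
+Before moving on, you can wait until the metadata store is ready. A small sketch, assuming the MySQL CR reports `.status.phase: Ready` the same way the Druid CR does:
+
+```bash
+# Block until the MySQL metadata store becomes Ready (or time out)
+$ kubectl wait -n demo mysql/mysql-demo \
+    --for=jsonpath='{.status.phase}'=Ready --timeout=10m
+mysql.kubedb.com/mysql-demo condition met
+```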
+
+### ZooKeeper
+
+Apache Druid uses [Apache ZooKeeper](https://zookeeper.apache.org/) (ZK) for management of the current cluster state, i.e. internal service discovery, coordination, and leader election.
+
+Fortunately, KubeDB also supports **ZooKeeper**, which can easily be deployed using the guide [here](/docs/v2024.4.27/guides/zookeeper/quickstart/quickstart).
+
+In this tutorial, we will create a ZooKeeper instance named `zk-demo` in the `demo` namespace.
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/druid/quickstart/zk-demo.yaml
+zookeeper.kubedb.com/zk-demo created
+```
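+
+You can check that the ZooKeeper instance has come up before proceeding (the `STATUS` column should eventually read `Ready`):
+
+```bash
+$ kubectl get zookeeper -n demo zk-demo
+```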
+
+### Deep Storage
+
+Another external dependency of Druid is deep storage, where the segments are stored. It is a storage mechanism that Apache Druid does not provide itself. **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, **S3-compatible storage** (like **Minio**), or **HDFS** are generally convenient options for deep storage.
+
+In this tutorial, we will run a `minio-server` as deep storage in our local `kind` cluster using `minio-operator` and create a bucket named `druid` in it, which the deployed Druid cluster will use.
+
+```bash
+$ helm upgrade --install --namespace "minio-operator" --create-namespace "minio-operator" minio/operator --set operator.replicaCount=1
+
+$ helm upgrade --install --namespace "demo" --create-namespace druid-minio minio/tenant \
+--set tenant.pools[0].servers=1 \
+--set tenant.pools[0].volumesPerServer=1 \
+--set tenant.pools[0].size=1Gi \
+--set tenant.certificate.requestAutoCert=false \
+--set tenant.buckets[0].name="druid"
+
+```
+
+Now we need to create a `Secret` named `deep-storage-config`. It contains the connection information that the Druid cluster will use to connect to the deep storage.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: deep-storage-config
+  namespace: demo
+stringData:
+  druid.storage.type: "s3"
+  druid.storage.bucket: "druid"
+  druid.storage.baseKey: "druid/segments"
+  druid.s3.accessKey: "minio"
+  druid.s3.secretKey: "minio123"
+  druid.s3.protocol: "http"
+  druid.s3.endpoint.signingRegion: "us-east-1"
+  druid.s3.endpoint.url: "http://myminio-hl.demo.svc.cluster.local:9000/"
+```
+
+Let’s create the `deep-storage-config` Secret shown above:
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/druid/quickstart/deep-storage-config.yaml
+secret/deep-storage-config created
+```
+
+You can also use options like **Amazon S3**, **Google Cloud Storage**, **Azure Blob Storage**, or **HDFS**: create a similar connection-information `Secret`, and you are good to go.
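+
+For illustration, a hypothetical **Amazon S3** variant of the same `Secret` might look like this (the bucket name and credentials are placeholders, and the `druid.s3.endpoint.*` keys are dropped because the AWS defaults apply):
+
+```bash
+# Hypothetical Amazon S3 variant of the deep storage Secret.
+# Replace the bucket name and credentials with your own values.
+kubectl apply -n demo -f - <<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: deep-storage-config
+stringData:
+  druid.storage.type: "s3"
+  druid.storage.bucket: "my-druid-bucket"
+  druid.storage.baseKey: "druid/segments"
+  druid.s3.accessKey: "my-access-key"
+  druid.s3.secretKey: "my-secret-key"
+EOF
+```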
+
+## Create a Druid Cluster
+
+The KubeDB operator implements a Druid CRD to define the specification of Druid.
+
+The Druid instance used for this tutorial:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+  name: druid-quickstart
+  namespace: demo
+spec:
+  version: 28.0.1
+  deepStorage:
+    type: s3
+    configSecret:
+      name: deep-storage-config
+  metadataStorage:
+    name: mysql-demo
+    namespace: demo
+    createTables: true
+  zookeeperRef:
+    name: zk-demo
+    namespace: demo
+  topology:
+    coordinators:
+      replicas: 1
+    brokers:
+      replicas: 1
+    historicals:
+      replicas: 1
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    middleManagers:
+      replicas: 1
+      storage:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
+        storageClassName: standard
+    routers:
+      replicas: 1
+  storageType: Durable
+  terminationPolicy: Delete
+  serviceTemplates:
+    - alias: primary
+      spec:
+        type: LoadBalancer
+        ports:
+          - name: routers
+            port: 8888
+```
+
+Here,
+
+- `spec.version` - is the name of the DruidVersion CR. Here, a Druid of version `28.0.1` will be created.
+- `spec.storageType` - specifies the type of storage that will be used for Druid. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used, then KubeDB will create the Druid cluster using an `EmptyDir` volume. In this case, you don't have to specify the `spec.storage` field. This is useful for testing purposes.
+- `spec.storage` specifies the StorageClass of PVC dynamically allocated to store data for this Druid instance. This storage spec will be passed to the PetSet created by the KubeDB operator to run Druid pods. You can specify any StorageClass available in your cluster with appropriate resource requests. If you don't specify `spec.storageType: Ephemeral`, then this field is required.
+- `spec.terminationPolicy` specifies what KubeDB should do when a user tries to delete the Druid CR. The termination policy `Delete` will delete the database pods and PVCs when the Druid CR is deleted.
+
+> Note: The `spec.storage` section is used to create a PVC for the database pod, with the storage size specified in the `storage.resources.requests` field. Don't specify `limits` here. PVCs do not get resized automatically.
+
+Let's create the Druid CR that is shown above:
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/druid/quickstart/druid-quickstart.yaml
+druid.kubedb.com/druid-quickstart created
+```
+
+The Druid's `STATUS` will go from `Provisioning` to `Ready` state within a few minutes. Once the `STATUS` is `Ready`, you are ready to use the newly provisioned Druid cluster.
+
+```bash
+$ kubectl get druid -n demo -w
+NAME               TYPE                  VERSION   STATUS         AGE
+druid-quickstart   kubedb.com/v1alpha2   28.0.1    Provisioning   17s
+druid-quickstart   kubedb.com/v1alpha2   28.0.1    Provisioning   28s
+.
+.
+druid-quickstart   kubedb.com/v1alpha2   28.0.1    Ready          82s
+```
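+
+Instead of watching, you can also block until the cluster reports `Ready`:
+
+```bash
+# Returns once .status.phase becomes Ready, or fails after the timeout
+$ kubectl wait -n demo druid/druid-quickstart \
+    --for=jsonpath='{.status.phase}'=Ready --timeout=10m
+druid.kubedb.com/druid-quickstart condition met
+```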
+
+Describe the Druid object to observe the progress if something goes wrong or the status is not changing for a long period of time:
+
+```bash
+$ kubectl describe druid -n demo druid-quickstart
+Name:         druid-quickstart
+Namespace:    demo
+Labels:       <none>
+Annotations:  <none>
+API Version:  kubedb.com/v1alpha2
+Kind:         Druid
+Metadata:
+  Creation Timestamp:  2024-05-02T10:00:12Z
+  Finalizers:
+    kubedb.com/druid
+  Generation:  1
+  Managed Fields:
+    API Version:  kubedb.com/v1alpha2
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:annotations:
+          .:
+          f:kubectl.kubernetes.io/last-applied-configuration:
+      f:spec:
+        .:
+        f:deepStorage:
+          .:
+          f:configSecret:
+          f:type:
+        f:healthChecker:
+          .:
+          f:failureThreshold:
+          f:periodSeconds:
+          f:timeoutSeconds:
+        f:metadataStorage:
+          .:
+          f:createTables:
+          f:name:
+          f:namespace:
+        f:storageType:
+        f:topology:
+          .:
+          f:brokers:
+            .:
+            f:podPlacementPolicy:
+            f:replicas:
+          f:coordinators:
+            .:
+            f:podPlacementPolicy:
+            f:replicas:
+          f:historicals:
+            .:
+            f:podPlacementPolicy:
+            f:replicas:
+            f:storage:
+              .:
+              f:accessModes:
+              f:resources:
+                .:
+                f:requests:
+                  .:
+                  f:storage:
+              f:storageClassName:
+          f:middleManagers:
+            .:
+            f:podPlacementPolicy:
+            f:replicas:
+            f:storage:
+              .:
+              f:accessModes:
+              f:resources:
+                .:
+                f:requests:
+                  .:
+                  f:storage:
+              f:storageClassName:
+          f:routers:
+            .:
+            f:podPlacementPolicy:
+            f:replicas:
+        f:version:
+        f:zookeeperRef:
+          .:
+          f:name:
+          f:namespace:
+    Manager:      kubectl-client-side-apply
+    Operation:    Update
+    Time:         2024-05-02T10:00:12Z
+    API Version:  kubedb.com/v1alpha2
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:metadata:
+        f:finalizers:
+          .:
+          v:"kubedb.com/druid":
+    Manager:      kubedb-provisioner
+    Operation:    Update
+    Time:         2024-05-02T10:00:12Z
+    API Version:  kubedb.com/v1alpha2
+    Fields Type:  FieldsV1
+    fieldsV1:
+      f:status:
+        .:
+        f:conditions:
+        f:phase:
+    Manager:      kubedb-provisioner
+    Operation:    Update
+    Subresource:  status
+    Time:         2024-05-02T10:01:34Z
+  Resource Version:  68607
+  UID:               7759adad-4e49-4f44-80ae-fc04cc474813
+Spec:
+  Auth Secret:
+    Name:  druid-quickstart-admin-cred
+  Deep Storage:
+    Config Secret:
+      Name:  deep-storage-config
+    Type:    s3
+  Disable Security:  false
+  Health Checker:
+    Failure Threshold:  3
+    Period Seconds:     30
+    Timeout Seconds:    10
+  Metadata Storage:
+    Create Tables:  true
+    Name:           mysql-demo
+    Namespace:      druid
+  Pod Template:
+    Controller:
+    Metadata:
+    Spec:
+  Storage Type:        Ephemeral
+  Termination Policy:  Delete
+  Topology:
+    Brokers:
+      Pod Placement Policy:
+        Name:  default
+      Pod Template:
+        Controller:
+        Metadata:
+        Spec:
+          Containers:
+            Name:  druid
+            Resources:
+              Limits:
+                Memory:  1Gi
+              Requests:
+                Cpu:     500m
+                Memory:  1Gi
+            Security Context:
+              Allow Privilege Escalation:  false
+              Capabilities:
+                Drop:
+                  ALL
+              Run As Non Root:  true
+              Run As User:      1000
+              Seccomp Profile:
+                Type:  RuntimeDefault
+          Init Containers:
+            Name:  init-druid
+            Resources:
+              Limits:
+                Memory:  512Mi
+              Requests:
+                Cpu:     200m
+                Memory:  512Mi
+            Security Context:
+              Allow Privilege Escalation:  false
+              Capabilities:
+                Drop:
+                  ALL
+              Run As Non Root:  true
+              Run As User:      1000
+              Seccomp Profile:
+                Type:  RuntimeDefault
+          Security Context:
+            Fs Group:  1000
+      Replicas:  1
+    Coordinators:
+      Pod Placement Policy:
+        Name:  default
+      Pod Template:
+        Controller:
+        Metadata:
+        Spec:
+          Containers:
+            Name:  druid
+            Resources:
+              Limits:
+                Memory:  1Gi
+              Requests:
+                Cpu:     500m
+                Memory:  1Gi
+            Security Context:
+              Allow Privilege Escalation:  false
+              Capabilities:
+                Drop:
+                  ALL
+              Run As Non Root:  true
+              Run As User:      1000
+              Seccomp Profile:
+                Type:  RuntimeDefault
+          Init Containers:
+            Name:  init-druid
+            Resources:
+              Limits:
+                Memory:  512Mi
+              Requests:
+                Cpu:     200m
+                Memory:  512Mi
+            Security Context:
+              Allow Privilege Escalation:  false
+              Capabilities:
+                Drop:
+                  ALL
+              Run As Non Root:  true
+              Run As User:      1000
+              Seccomp Profile:
+                Type:  RuntimeDefault
+          Security Context:
+            Fs Group:  1000
+      Replicas:  2
+    Historicals:
+      Pod Placement Policy:
+        Name:  default
+      Pod Template:
+        Controller:
+        Metadata:
+        Spec:
+          Containers:
+            Name:  druid
+            Resources:
+              Limits:
+                Memory:  1Gi
+              Requests:
+                Cpu:     500m
+                Memory:  1Gi
+            Security Context:
+              Allow Privilege Escalation:  false
+              Capabilities:
+                Drop:
+                  ALL
+              Run As Non Root:  true
+              Run As User:      1000
+              Seccomp Profile:
+                Type:  RuntimeDefault
+          Init Containers:
+            Name:  init-druid
+            Resources:
+              Limits:
+                Memory:  512Mi
+              Requests:
+                Cpu:     200m
+                Memory:  512Mi
+            Security Context:
+              Allow Privilege Escalation:  false
+              Capabilities:
+                Drop:
+                  ALL
+              Run As Non Root:  true
+              Run As User:      1000
+              Seccomp Profile:
+                Type:  RuntimeDefault
+          Security Context:
+            Fs Group:  1000
+      Replicas:  1
+      Storage:
+        Access Modes:
+          ReadWriteOnce
+        Resources:
+          Requests:
+            Storage:  1Gi
+        Storage Class Name:  standard
+    Middle Managers:
+      Pod Placement Policy:
+        Name:  default
+      Pod Template:
+        Controller:
+        Metadata:
+        Spec:
+          Containers:
+            Name:  druid
+            Resources:
+              Limits:
+                Memory:  2560Mi
+              Requests:
+                Cpu:     500m
+                Memory:  2560Mi
+            Security Context:
+              Allow Privilege Escalation:  false
+              Capabilities:
+                Drop:
+                  ALL
+              Run As Non Root:  true
+              Run As User:      1000
+              Seccomp Profile:
+                Type:  RuntimeDefault
+          Init Containers:
+            Name:  init-druid
+            Resources:
+              Limits:
+                Memory:  512Mi
+              Requests:
+                Cpu:     200m
+                Memory:  512Mi
+            Security Context:
+              Allow Privilege Escalation:  false
+              Capabilities:
+                Drop:
+                  ALL
+              Run As Non Root:  true
+              Run As User:      1000
+              Seccomp Profile:
+                Type:  RuntimeDefault
+          Security Context:
+            Fs Group:  1000
+      Replicas:  1
+      Storage:
+        Access Modes:
+          ReadWriteOnce
+        Resources:
+          Requests:
+            Storage:  1Gi
+        Storage Class Name:  standard
+    Routers:
+      Pod Placement Policy:
+        Name:  default
+      Pod Template:
+        Controller:
+        Metadata:
+        Spec:
+          Containers:
+            Name:  druid
+            Resources:
+              Limits:
+                Memory:  1Gi
+              Requests:
+                Cpu:     500m
+                Memory:  1Gi
+            Security Context:
+              Allow Privilege Escalation:  false
+              Capabilities:
+                Drop:
+                  ALL
+              Run As Non Root:  true
+              Run As User:      1000
+              Seccomp Profile:
+                Type:  RuntimeDefault
+          Init Containers:
+            Name:  init-druid
+            Resources:
+              Limits:
+                Memory:  512Mi
+              Requests:
+                Cpu:     200m
+                Memory:  512Mi
+            Security Context:
+              Allow Privilege Escalation:  false
+              Capabilities:
+                Drop:
+                  ALL
+              Run As Non Root:  true
+              Run As User:      1000
+              Seccomp Profile:
+                Type:  RuntimeDefault
+          Security Context:
+            Fs Group:  1000
+      Replicas:  1
+  Version:  28.0.1
+  Zookeeper Ref:
+    Name:       zk-demo
+    Namespace:  druid
+Status:
+  Conditions:
+    Last Transition Time:  2024-05-02T10:00:12Z
+    Message:               The KubeDB operator has started the provisioning of Druid: demo/druid-quickstart
+    Observed Generation:   1
+    Reason:                DatabaseProvisioningStartedSuccessfully
+    Status:                True
+    Type:                  ProvisioningStarted
+    Last Transition Time:  2024-05-02T10:00:40Z
+    Message:               All desired replicas are ready.
+    Observed Generation:   1
+    Reason:                AllReplicasReady
+    Status:                True
+    Type:                  ReplicaReady
+    Last Transition Time:  2024-05-02T10:01:11Z
+    Message:               The Druid: demo/druid-quickstart is accepting client requests and nodes formed a cluster
+    Observed Generation:   1
+    Reason:                DatabaseAcceptingConnectionRequest
+    Status:                True
+    Type:                  AcceptingConnection
+    Last Transition Time:  2024-05-02T10:01:34Z
+    Message:               The Druid: demo/druid-quickstart is ready.
+    Observed Generation:   1
+    Reason:                ReadinessCheckSucceeded
+    Status:                True
+    Type:                  Ready
+    Last Transition Time:  2024-05-02T10:01:34Z
+    Message:               The Druid: demo/druid-quickstart is successfully provisioned.
+    Observed Generation:   1
+    Reason:                DatabaseSuccessfullyProvisioned
+    Status:                True
+    Type:                  Provisioned
+  Phase:  Ready
+Events:   <none>
+
+```
+
+### KubeDB Operator Generated Resources
+
+On deployment of a Druid CR, the operator creates the following resources:
+
+```bash
+$ kubectl get all,secret,petset -n demo -l 'app.kubernetes.io/instance=druid-quickstart'
+NAME                                    READY   STATUS    RESTARTS   AGE
+pod/druid-quickstart-brokers-0          1/1     Running   0          2m4s
+pod/druid-quickstart-coordinators-0     1/1     Running   0          2m10s
+pod/druid-quickstart-historicals-0      1/1     Running   0          2m8s
+pod/druid-quickstart-middlemanagers-0   1/1     Running   0          2m6s
+pod/druid-quickstart-routers-0          1/1     Running   0          2m1s
+
+NAME                                    TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                                 AGE
+service/druid-quickstart-brokers        ClusterIP      10.96.28.252    <none>         8082/TCP                                                2m13s
+service/druid-quickstart-coordinators   ClusterIP      10.96.52.186    <none>         8081/TCP                                                2m13s
+service/druid-quickstart-pods           ClusterIP      None            <none>         8081/TCP,8090/TCP,8083/TCP,8091/TCP,8082/TCP,8888/TCP   2m13s
+service/druid-quickstart-routers        LoadBalancer   10.96.134.202   10.86.51.181   8888:32751/TCP                                          2m13s
+
+NAME                                                  TYPE               VERSION   AGE
+appbinding.appcatalog.appscode.com/druid-quickstart   kubedb.com/druid   28.0.1    2m1s
+
+NAME                                  TYPE                       DATA   AGE
+secret/druid-quickstart-admin-cred    kubernetes.io/basic-auth   2      2m13s
+
+NAME                                                             AGE
+petset.apps.k8s.appscode.com/druid-quickstart-brokers            2m4s
+petset.apps.k8s.appscode.com/druid-quickstart-coordinators       2m10s
+petset.apps.k8s.appscode.com/druid-quickstart-historicals        2m8s
+petset.apps.k8s.appscode.com/druid-quickstart-middlemanagers     2m6s
+petset.apps.k8s.appscode.com/druid-quickstart-routers            2m1s
+
+```
+
+- `PetSet` - In topology mode, the operator creates 4 to 6 PetSets (depending on the topology you provide, as overlords and routers are optional) with the name `{Druid-Name}-{Suffix}`.
+- `Services` - For topology mode, a headless service with the name `{Druid-Name}-pods`. Other than that, 2 to 4 more services (depending on the specified topology) with the name `{Druid-Name}-{Suffix}` can be created.
+  - `{Druid-Name}-brokers` - The primary service, used to connect the brokers with external clients.
+  - `{Druid-Name}-coordinators` - The primary service, used to connect the coordinators with external clients.
+  - `{Druid-Name}-overlords` - This service is only created if `spec.topology.overlords` is provided. In the same way, it is used to connect the overlords with external clients.
+  - `{Druid-Name}-routers` - Like the previous one, this service is only created if `spec.topology.routers` is provided. It is used to connect the routers with external clients.
+- `AppBinding` - an [AppBinding](/docs/v2024.4.27/guides/kafka/concepts/appbinding) which holds the connection information for the Druid cluster. Like other resources, it is named after the Druid instance.
+- `Secrets` - A secret is generated for each Druid cluster.
+  - `{Druid-Name}-{username}-cred` - the auth secret which holds the `username` and `password` for the Druid users. The operator generates credentials for the `admin` user and creates a secret for authentication.
+
+## Connect with Druid Database
+We will use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to connect with the routers of the Druid cluster. Then we will use `curl` to send `HTTP` requests to check the cluster health and verify that our Druid database is working well. It is also possible to use the `External-IP` to access Druid nodes if you set the service type of that node to `LoadBalancer`.
+
+### Check the Service Health
+
+Let's port-forward the port `8888` to the local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/druid-quickstart-routers 8888
+Forwarding from 127.0.0.1:8888 -> 8888
+Forwarding from [::1]:8888 -> 8888
+```
+
+Now, the Druid cluster is accessible at `localhost:8888`. Let's check the [Service Health](https://druid.apache.org/docs/latest/api-reference/service-status-api/#get-service-health) of Routers of the Druid database.
+
+```bash
+$ curl "http://localhost:8888/status/health"
+true
+```
+From the retrieved health information above, we can see that our Druid cluster’s status is `true`, indicating that the service can receive API calls and is healthy. In the same way, it is possible to check the health of other Druid nodes by port-forwarding the appropriate services, as shown below.
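+
+For example, to check the coordinators (assuming they serve the same `/status/health` endpoint on their service port `8081`):
+
+```bash
+# Port-forward the coordinators service, then probe it from another terminal
+$ kubectl port-forward -n demo svc/druid-quickstart-coordinators 8081
+Forwarding from 127.0.0.1:8081 -> 8081
+
+$ curl "http://localhost:8081/status/health"
+true
+```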
+
+### Access the web console
+
+We can also access the [web console](https://druid.apache.org/docs/latest/operations/web-console) of the Druid database from any browser by port-forwarding the routers as shown in the previous step, or directly using the `External-IP` if the router service type is `LoadBalancer`.
+
+Now hit `http://localhost:8888` from any browser, and you will be prompted to provide the credentials of the Druid database. By following the steps below, you can get the credentials generated by the KubeDB operator for your Druid database.
+
+**Connection information:**
+
+- Username:
+
+  ```bash
+  $ kubectl get secret -n demo druid-quickstart-admin-cred -o jsonpath='{.data.username}' | base64 -d
+  admin
+  ```
+
+- Password:
+
+  ```bash
+  $ kubectl get secret -n demo druid-quickstart-admin-cred -o jsonpath='{.data.password}' | base64 -d
+  LzJtVRX5E8MorFaf
+  ```
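+
+You can also sanity-check these credentials from the terminal before opening the browser. A sketch, assuming the router's `/status` API enforces the same basic auth when security is enabled:
+
+```bash
+# Reuse the router port-forward from earlier; an authenticated request
+# should return cluster status JSON instead of an authentication error.
+$ PASSWORD=$(kubectl get secret -n demo druid-quickstart-admin-cred \
+    -o jsonpath='{.data.password}' | base64 -d)
+$ curl -u "admin:${PASSWORD}" "http://localhost:8888/status"
+```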
+
+After providing the credentials correctly, you should be able to access the web console as shown below.
+
+You can use this web console for loading data, managing datasources and tasks, and viewing server status and segment information. You can also run SQL and native Druid queries in the console.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo druid druid-quickstart -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+druid.kubedb.com/druid-quickstart patched
+
+$ kubectl delete dr druid-quickstart -n demo
+druid.kubedb.com "druid-quickstart" deleted
+
+$ kubectl delete namespace demo
+namespace "demo" deleted
+```
+
+## Tips for Testing
+
+If you are just testing some basic functionalities, you might want to avoid additional hassles due to some safety features that are great for the production environment. You can follow these tips to avoid them.
+
+1. **Use `storageType: Ephemeral`**. Databases are precious. You might not want to lose your data in your production environment if the database pod fails. So, we recommend using `spec.storageType: Durable` and providing a storage spec in the `spec.storage` section. For testing purposes, you can just use `spec.storageType: Ephemeral`. KubeDB will use [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) for storage. You will not need to provide the `spec.storage` section.
+2. **Use `terminationPolicy: WipeOut`**. It is nice to be able to resume the database from the previous state. So, we preserve all your `PVCs` and auth `Secrets`. If you don't want to resume the database, you can just use `spec.terminationPolicy: WipeOut`. It will clean up every resource that was created with the Druid CR. For more details, please visit [here](/docs/v2024.4.27/guides/kafka/concepts/kafka#specterminationpolicy). A combined test-only sketch follows this list.
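+
+Putting both tips together, a minimal test-only variant of the quickstart CR might look like the following sketch; all other fields mirror the CR shown earlier, and the `storage` sections are dropped because `Ephemeral` does not need them:
+
+```bash
+# Test-only sketch: Ephemeral storage + WipeOut termination policy
+kubectl apply -f - <<EOF
+apiVersion: kubedb.com/v1alpha2
+kind: Druid
+metadata:
+  name: druid-test
+  namespace: demo
+spec:
+  version: 28.0.1
+  storageType: Ephemeral
+  terminationPolicy: WipeOut
+  deepStorage:
+    type: s3
+    configSecret:
+      name: deep-storage-config
+  metadataStorage:
+    name: mysql-demo
+    namespace: demo
+    createTables: true
+  zookeeperRef:
+    name: zk-demo
+    namespace: demo
+  topology:
+    coordinators:
+      replicas: 1
+    brokers:
+      replicas: 1
+    historicals:
+      replicas: 1
+    middleManagers:
+      replicas: 1
+    routers:
+      replicas: 1
+EOF
+```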
+
+## Next Steps
+
+[//]: # (- Druid Clustering supported by KubeDB)
+
+[//]: # ( - [Combined Clustering](/docs/guides/kafka/clustering/combined-cluster/index.md))
+
+[//]: # ( - [Topology Clustering](/docs/guides/kafka/clustering/topology-cluster/index.md))
+- Use [kubedb cli](/docs/v2024.4.27/guides/kafka/cli/cli) to manage databases like kubectl for Kubernetes.
+
+[//]: # (- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).)
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
diff --git a/content/docs/v2024.4.27/guides/kafka/README.md b/content/docs/v2024.4.27/guides/kafka/README.md
index 3b7be622aa..a3e7dd1911 100644
--- a/content/docs/v2024.4.27/guides/kafka/README.md
+++ b/content/docs/v2024.4.27/guides/kafka/README.md
@@ -28,44 +28,79 @@ info:
## Supported Kafka Features
+| Features | Kafka | ConnectCluster |
+|------------------------------------------------------------------------------------|----------|----------------|
+| Clustering - Combined (shared controller and broker nodes) | ✓ | - |
+| Clustering - Topology (dedicated controllers and broker nodes) | ✓ | - |
+| Custom Configuration | ✓ | ✓ |
+| Automated Version Update | ✓ | ✗ |
+| Automated Vertical Scaling | ✓ | ✗ |
+| Automated Horizontal Scaling | ✓ | ✗ |
+| Automated Volume Expansion | ✓ | - |
+| Custom Docker Image | ✓ | ✓ |
+| Authentication & Authorization | ✓ | ✓ |
+| TLS: Add, Remove, Update, Rotate ( [Cert Manager](https://cert-manager.io/docs/) ) | ✓ | ✓ |
+| Reconfigurable Health Checker | ✓ | ✓ |
+| Externally manageable Auth Secret | ✓ | ✓ |
+| Pre-Configured JMX Exporter for Metrics | ✓ | ✓ |
+| Monitoring with Prometheus & Grafana | ✓ | ✓ |
+| Autoscaling (vertically, volume) | ✓ | ✗ |
+| Custom Volume | ✓ | ✓ |
+| Persistent Volume | ✓ | - |
+| Connectors | - | ✓ |
-| Features | Community | Enterprise |
-|----------------------------------------------------------------|:---------:|:----------:|
-| Clustering - Combined (shared controller and broker nodes) | ✓ | ✓ |
-| Clustering - Topology (dedicated controllers and broker nodes) | ✓ | ✓ |
-| Custom Docker Image | ✓ | ✓ |
-| Authentication & Authorization | ✓ | ✓ |
-| Persistent Volume | ✓ | ✓ |
-| Custom Volume | ✓ | ✓ |
-| TLS: using ( [Cert Manager](https://cert-manager.io/docs/) ) | ✗ | ✓ |
-| Reconfigurable Health Checker | ✓ | ✓ |
-| Externally manageable Auth Secret | ✓ | ✓ |
-| Monitoring with Prometheus & Grafana | ✓ | ✓ |
+## Lifecycle of Kafka Object
+
+
+
+
+
+
+
+## Lifecycle of ConnectCluster Object
+
+
+
+
## Supported Kafka Versions
KubeDB supports the following Kafka versions. Supported versions are applicable for KRaft mode (i.e. ZooKeeper-less) releases:
-- `3.3.0`
- `3.3.2`
-- `3.4.0`
+- `3.4.1`
+- `3.5.1`
+- `3.5.2`
+- `3.6.0`
+- `3.6.1`
-> The listed KafkaVersions are tested and provided as a part of the installation process (ie. catalog chart), but you are open to create your own [KafkaVersion](/docs/v2024.4.27/guides/kafka/concepts/catalog) object with your custom Kafka image.
+> The listed KafkaVersions are tested and provided as a part of the installation process (i.e. the catalog chart), but you are free to create your own [KafkaVersion](/docs/v2024.4.27/guides/kafka/concepts/kafkaversion) object with your custom Kafka image.
-## Lifecycle of Kafka Object
+## Supported KafkaConnector Versions
-
+| Connector Plugin | Type | Version | Connector Class |
+|----------------------|--------|-------------|------------------------------------------------------------|
+| mongodb-1.11.0 | Source | 1.11.0 | com.mongodb.kafka.connect.MongoSourceConnector |
+| mongodb-1.11.0 | Sink | 1.11.0 | com.mongodb.kafka.connect.MongoSinkConnector |
+| mysql-2.4.2.final | Source | 2.4.2.Final | io.debezium.connector.mysql.MySqlConnector |
+| postgres-2.4.2.final | Source | 2.4.2.Final | io.debezium.connector.postgresql.PostgresConnector |
+| jdbc-2.6.1.final | Sink | 2.6.1.Final | io.debezium.connector.jdbc.JdbcSinkConnector |
+| s3-2.15.0 | Sink | 2.15.0 | io.aiven.kafka.connect.s3.AivenKafkaConnectS3SinkConnector |
+| gcs-0.13.0 | Sink | 0.13.0 | io.aiven.kafka.connect.gcs.GcsSinkConnector |
-
-
-
## User Guide
-- [Quickstart Kafka](/docs/v2024.4.27/guides/kafka/quickstart/overview/) with KubeDB Operator.
+- [Quickstart Kafka](/docs/v2024.4.27/guides/kafka/quickstart/overview/kafka/) with KubeDB Operator.
+- [Quickstart ConnectCluster](/docs/v2024.4.27/guides/kafka/quickstart/overview/connectcluster/) with KubeDB Operator.
- Kafka Clustering supported by KubeDB
- [Combined Clustering](/docs/v2024.4.27/guides/kafka/clustering/combined-cluster/)
- [Topology Clustering](/docs/v2024.4.27/guides/kafka/clustering/topology-cluster/)
+- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus and Grafana](/docs/v2024.4.27/guides/kafka/monitoring/using-prometheus-operator).
- Use [kubedb cli](/docs/v2024.4.27/guides/kafka/cli/cli) to manage databases like kubectl for Kubernetes.
- Detail concepts of [Kafka object](/docs/v2024.4.27/guides/kafka/concepts/kafka).
+- Detail concepts of [ConnectCluster object](/docs/v2024.4.27/guides/kafka/concepts/connectcluster).
+- Detail concepts of [Connector object](/docs/v2024.4.27/guides/kafka/concepts/connector).
+- Detail concepts of [KafkaVersion object](/docs/v2024.4.27/guides/kafka/concepts/kafkaversion).
+- Detail concepts of [KafkaConnectorVersion object](/docs/v2024.4.27/guides/kafka/concepts/kafkaconnectorversion).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
\ No newline at end of file
diff --git a/content/docs/v2024.4.27/guides/kafka/cli/cli.md b/content/docs/v2024.4.27/guides/kafka/cli/cli.md
index 1307dcf6d3..b98bd13514 100644
--- a/content/docs/v2024.4.27/guides/kafka/cli/cli.md
+++ b/content/docs/v2024.4.27/guides/kafka/cli/cli.md
@@ -34,21 +34,21 @@ KubeDB comes with its own cli. It is called `kubedb` cli. `kubedb` can be used t
`kubectl create` creates a database CRD object in `default` namespace by default. Following command will create a Kafka object as specified in `kafka.yaml`.
```bash
-$ kubectl create -f kafka.yaml
+$ kubectl create -f kafka.yaml
kafka.kubedb.com/kafka created
```
You can provide namespace as a flag `--namespace`. Provided namespace should match with namespace specified in input file.
```bash
-$ kubectl create -f kafka.yaml --namespace=kube-system
+$ kubectl create -f kafka.yaml --namespace=kube-system
kafka.kubedb.com/kafka created
```
`kubectl create` command also considers `stdin` as input.
```bash
-cat kafka.yaml | kubectl create -f -
+cat kafka.yaml | kubectl create -f -
```
### How to List Objects
@@ -58,7 +58,7 @@ cat kafka.yaml | kubectl create -f -
```bash
$ kubectl get kafka
NAME TYPE VERSION STATUS AGE
-kafka kubedb.com/v1alpha2 3.4.0 Ready 36m
+kafka kubedb.com/v1alpha2 3.6.1 Ready 36m
```
You can also use short-form (`kf`) for kafka CR.
@@ -66,7 +66,7 @@ You can also use short-form (`kf`) for kafka CR.
```bash
$ kubectl get kf
NAME TYPE VERSION STATUS AGE
-kafka kubedb.com/v1alpha2 3.4.0 Ready 36m
+kafka kubedb.com/v1alpha2 3.6.1 Ready 36m
```
To get YAML of an object, use `--output=yaml` or `-oyaml` flag. Use `-n` flag for referring namespace.
@@ -78,7 +78,7 @@ kind: Kafka
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
- {"apiVersion":"kubedb.com/v1alpha2","kind":"Kafka","metadata":{"annotations":{},"name":"kafka","namespace":"demo"},"spec":{"authSecret":{"name":"kafka-admin-cred"},"enableSSL":true,"healthChecker":{"failureThreshold":3,"periodSeconds":20,"timeoutSeconds":10},"keystoreCredSecret":{"name":"kafka-keystore-cred"},"storageType":"Durable","terminationPolicy":"DoNotTerminate","tls":{"certificates":[{"alias":"server","secretName":"kafka-server-cert"},{"alias":"client","secretName":"kafka-client-cert"}],"issuerRef":{"apiGroup":"cert-manager.io","kind":"Issuer","name":"kafka-ca-issuer"}},"topology":{"broker":{"replicas":3,"resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"500m","memory":"1Gi"}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"suffix":"broker"},"controller":{"replicas":3,"resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"500m","memory":"1Gi"}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"suffix":"controller"}},"version":"3.4.0"}}
+ {"apiVersion":"kubedb.com/v1alpha2","kind":"Kafka","metadata":{"annotations":{},"name":"kafka","namespace":"demo"},"spec":{"authSecret":{"name":"kafka-admin-cred"},"enableSSL":true,"healthChecker":{"failureThreshold":3,"periodSeconds":20,"timeoutSeconds":10},"keystoreCredSecret":{"name":"kafka-keystore-cred"},"storageType":"Durable","terminationPolicy":"DoNotTerminate","tls":{"certificates":[{"alias":"server","secretName":"kafka-server-cert"},{"alias":"client","secretName":"kafka-client-cert"}],"issuerRef":{"apiGroup":"cert-manager.io","kind":"Issuer","name":"kafka-ca-issuer"}},"topology":{"broker":{"replicas":3,"resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"500m","memory":"1Gi"}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"suffix":"broker"},"controller":{"replicas":3,"resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"500m","memory":"1Gi"}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"suffix":"controller"}},"version":"3.6.1"}}
creationTimestamp: "2023-03-29T07:01:29Z"
finalizers:
- kubedb.com
@@ -147,7 +147,7 @@ spec:
storage: 1Gi
storageClassName: standard
suffix: controller
- version: 3.4.0
+ version: 3.6.1
status:
conditions:
- lastTransitionTime: "2023-03-29T07:01:29Z"
@@ -192,7 +192,7 @@ $ kubectl get kf kafka -n demo -ojson
"kind": "Kafka",
"metadata": {
"annotations": {
- "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"kubedb.com/v1alpha2\",\"kind\":\"Kafka\",\"metadata\":{\"annotations\":{},\"name\":\"kafka\",\"namespace\":\"demo\"},\"spec\":{\"authSecret\":{\"name\":\"kafka-admin-cred\"},\"enableSSL\":true,\"healthChecker\":{\"failureThreshold\":3,\"periodSeconds\":20,\"timeoutSeconds\":10},\"keystoreCredSecret\":{\"name\":\"kafka-keystore-cred\"},\"storageType\":\"Durable\",\"terminationPolicy\":\"DoNotTerminate\",\"tls\":{\"certificates\":[{\"alias\":\"server\",\"secretName\":\"kafka-server-cert\"},{\"alias\":\"client\",\"secretName\":\"kafka-client-cert\"}],\"issuerRef\":{\"apiGroup\":\"cert-manager.io\",\"kind\":\"Issuer\",\"name\":\"kafka-ca-issuer\"}},\"topology\":{\"broker\":{\"replicas\":3,\"resources\":{\"limits\":{\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"500m\",\"memory\":\"1Gi\"}},\"storage\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"1Gi\"}},\"storageClassName\":\"standard\"},\"suffix\":\"broker\"},\"controller\":{\"replicas\":3,\"resources\":{\"limits\":{\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"500m\",\"memory\":\"1Gi\"}},\"storage\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"1Gi\"}},\"storageClassName\":\"standard\"},\"suffix\":\"controller\"}},\"version\":\"3.4.0\"}}\n"
+ "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"kubedb.com/v1alpha2\",\"kind\":\"Kafka\",\"metadata\":{\"annotations\":{},\"name\":\"kafka\",\"namespace\":\"demo\"},\"spec\":{\"authSecret\":{\"name\":\"kafka-admin-cred\"},\"enableSSL\":true,\"healthChecker\":{\"failureThreshold\":3,\"periodSeconds\":20,\"timeoutSeconds\":10},\"keystoreCredSecret\":{\"name\":\"kafka-keystore-cred\"},\"storageType\":\"Durable\",\"terminationPolicy\":\"DoNotTerminate\",\"tls\":{\"certificates\":[{\"alias\":\"server\",\"secretName\":\"kafka-server-cert\"},{\"alias\":\"client\",\"secretName\":\"kafka-client-cert\"}],\"issuerRef\":{\"apiGroup\":\"cert-manager.io\",\"kind\":\"Issuer\",\"name\":\"kafka-ca-issuer\"}},\"topology\":{\"broker\":{\"replicas\":3,\"resources\":{\"limits\":{\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"500m\",\"memory\":\"1Gi\"}},\"storage\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"1Gi\"}},\"storageClassName\":\"standard\"},\"suffix\":\"broker\"},\"controller\":{\"replicas\":3,\"resources\":{\"limits\":{\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"500m\",\"memory\":\"1Gi\"}},\"storage\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"1Gi\"}},\"storageClassName\":\"standard\"},\"suffix\":\"controller\"}},\"version\":\"3.6.1\"}}\n"
},
"creationTimestamp": "2023-03-29T07:01:29Z",
"finalizers": [
@@ -293,7 +293,7 @@ $ kubectl get kf kafka -n demo -ojson
"suffix": "controller"
}
},
- "version": "3.4.0"
+ "version": "3.6.1"
},
"status": {
"conditions": [
@@ -353,15 +353,15 @@ demo pod/kafka-broker-1 1/1 Running 0 45m 10.24
demo pod/kafka-broker-2 1/1 Running 0 45m 10.244.0.57 kind-control-plane
demo pod/kafka-controller-0 1/1 Running 0 45m 10.244.0.51 kind-control-plane
demo pod/kafka-controller-1 1/1 Running 0 45m 10.244.0.55 kind-control-plane
-demo pod/kafka-controller-2 1/1 Running 3 (45m ago) 45m 10.244.0.58 kind-control-plane
+demo pod/kafka-controller-2 1/1 Running 0 45m 10.244.0.58 kind-control-plane
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
demo service/kafka-broker ClusterIP None 9092/TCP,29092/TCP 46m app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,kubedb.com/role=broker
demo service/kafka-controller ClusterIP None 9093/TCP 46m app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,kubedb.com/role=controller
NAMESPACE NAME READY AGE CONTAINERS IMAGES
-demo statefulset.apps/kafka-broker 3/3 45m kafka docker.io/kubedb/kafka-kraft:3.4.0@sha256:f059db2929e3cfe388f50e82e168a9ce94b012e413e056eda2838df48632048a
-demo statefulset.apps/kafka-controller 3/3 45m kafka docker.io/kubedb/kafka-kraft:3.4.0@sha256:f059db2929e3cfe388f50e82e168a9ce94b012e413e056eda2838df48632048a
+demo statefulset.apps/kafka-broker 3/3 45m kafka ghcr.io/appscode-images/kafka-kraft:3.6.1@sha256:e251d3c0ceee0db8400b689e42587985034852a8a6c81b5973c2844e902e6d11
+demo statefulset.apps/kafka-controller 3/3 45m kafka ghcr.io/appscode-images/kafka-kraft:3.6.1@sha256:e251d3c0ceee0db8400b689e42587985034852a8a6c81b5973c2844e902e6d11
NAMESPACE NAME TYPE VERSION AGE
demo appbinding.appcatalog.appscode.com/kafka kubedb.com/kafka 3.4.0 45m
@@ -703,14 +703,14 @@ kafka.kubedb.com "kafka" deleted
You can also use YAML files to delete objects. The following command will delete a Kafka using the type and name specified in `kafka.yaml`.
```bash
-$ kubectl delete -f kafka.yaml
+$ kubectl delete -f kafka.yaml
kafka.kubedb.com "kafka" deleted
```
`kubectl delete` command also takes input from `stdin`.
```bash
-cat kafka.yaml | kubectl delete -f -
+cat kafka.yaml | kubectl delete -f -
```
To delete database with matching labels, use `--selector` flag. The following command will delete kafka with label `app.kubernetes.io/instance=kafka`.
diff --git a/content/docs/v2024.4.27/guides/kafka/clustering/combined-cluster/index.md b/content/docs/v2024.4.27/guides/kafka/clustering/combined-cluster/index.md
index 776a4fab0c..6350098dc6 100644
--- a/content/docs/v2024.4.27/guides/kafka/clustering/combined-cluster/index.md
+++ b/content/docs/v2024.4.27/guides/kafka/clustering/combined-cluster/index.md
@@ -48,7 +48,7 @@ demo Active 9s
## Create Standalone Kafka Cluster
-Here, we are going to create a standalone (ie. `replicas: 1`) Kafka cluster in Kraft mode. For this demo, we are going to provision kafka version `3.3.2`. To learn more about Kafka CR, visit [here](/docs/v2024.4.27/guides/kafka/concepts/kafka). visit [here](/docs/v2024.4.27/guides/kafka/concepts/catalog) to learn more about KafkaVersion CR.
+Here, we are going to create a standalone (i.e. `replicas: 1`) Kafka cluster in KRaft mode. For this demo, we are going to provision Kafka version `3.6.1`. To learn more about the Kafka CR, visit [here](/docs/v2024.4.27/guides/kafka/concepts/kafka). Visit [here](/docs/v2024.4.27/guides/kafka/concepts/kafkaversion) to learn more about the KafkaVersion CR.
```yaml
apiVersion: kubedb.com/v1alpha2
@@ -58,7 +58,7 @@ metadata:
namespace: demo
spec:
replicas: 1
- version: 3.3.2
+ version: 3.6.1
storage:
accessModes:
- ReadWriteOnce
@@ -82,12 +82,12 @@ Watch the bootstrap progress:
```bash
$ kubectl get kf -n demo -w
NAME TYPE VERSION STATUS AGE
-kafka-standalone kubedb.com/v1alpha2 3.3.2 Provisioning 8s
-kafka-standalone kubedb.com/v1alpha2 3.3.2 Provisioning 14s
-kafka-standalone kubedb.com/v1alpha2 3.3.2 Provisioning 35s
-kafka-standalone kubedb.com/v1alpha2 3.3.2 Provisioning 35s
-kafka-standalone kubedb.com/v1alpha2 3.3.2 Provisioning 36s
-kafka-standalone kubedb.com/v1alpha2 3.3.2 Ready 41s
+kafka-standalone kubedb.com/v1alpha2 3.6.1 Provisioning 8s
+kafka-standalone kubedb.com/v1alpha2 3.6.1 Provisioning 14s
+kafka-standalone kubedb.com/v1alpha2 3.6.1 Provisioning 35s
+kafka-standalone kubedb.com/v1alpha2 3.6.1 Provisioning 35s
+kafka-standalone kubedb.com/v1alpha2 3.6.1 Provisioning 36s
+kafka-standalone kubedb.com/v1alpha2 3.6.1 Ready 41s
```
Hence, the cluster is ready to use.
@@ -105,7 +105,7 @@ NAME READY AGE
statefulset.apps/kafka-standalone 1/1 8m56s
NAME TYPE VERSION AGE
-appbinding.appcatalog.appscode.com/kafka-standalone kubedb.com/kafka 3.3.2 8m56s
+appbinding.appcatalog.appscode.com/kafka-standalone kubedb.com/kafka 3.6.1 8m56s
NAME TYPE DATA AGE
secret/kafka-standalone-admin-cred kubernetes.io/basic-auth 2 8m59s
@@ -127,7 +127,7 @@ metadata:
namespace: demo
spec:
replicas: 3
- version: 3.3.2
+ version: 3.6.1
storage:
accessModes:
- ReadWriteOnce
@@ -150,12 +150,12 @@ Watch the bootstrap progress:
```bash
$ kubectl get kf -n demo -w
-kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 9s
-kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 14s
-kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 18s
-kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 2m6s
-kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 2m8s
-kafka-multinode kubedb.com/v1alpha2 3.3.2 Ready 2m14s
+kafka-multinode kubedb.com/v1alpha2 3.6.1 Provisioning 9s
+kafka-multinode kubedb.com/v1alpha2 3.6.1 Provisioning 14s
+kafka-multinode kubedb.com/v1alpha2 3.6.1 Provisioning 18s
+kafka-multinode kubedb.com/v1alpha2 3.6.1 Provisioning 2m6s
+kafka-multinode kubedb.com/v1alpha2 3.6.1 Provisioning 2m8s
+kafka-multinode kubedb.com/v1alpha2 3.6.1 Ready 2m14s
```
Hence, the cluster is ready to use.
@@ -175,7 +175,7 @@ NAME READY AGE
statefulset.apps/kafka-multinode 3/3 6m2s
NAME TYPE VERSION AGE
-appbinding.appcatalog.appscode.com/kafka-multinode kubedb.com/kafka 3.3.2 6m2s
+appbinding.appcatalog.appscode.com/kafka-multinode kubedb.com/kafka 3.6.1 6m2s
NAME TYPE DATA AGE
secret/kafka-multinode-admin-cred kubernetes.io/basic-auth 2 6m7s
@@ -321,6 +321,6 @@ $ kubectl delete namespace demo
- Deploy [dedicated topology cluster](/docs/v2024.4.27/guides/kafka/clustering/topology-cluster/) for Apache Kafka
- Monitor your Kafka cluster with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.4.27/guides/kafka/monitoring/using-prometheus-operator).
- Detail concepts of [Kafka object](/docs/v2024.4.27/guides/kafka/concepts/kafka).
-- Detail concepts of [KafkaVersion object](/docs/v2024.4.27/guides/kafka/concepts/catalog).
+- Detail concepts of [KafkaVersion object](/docs/v2024.4.27/guides/kafka/concepts/kafkaversion).
- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/v2024.4.27/guides/kafka/cli/cli).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
\ No newline at end of file
diff --git a/content/docs/v2024.4.27/guides/kafka/clustering/topology-cluster/index.md b/content/docs/v2024.4.27/guides/kafka/clustering/topology-cluster/index.md
index 5e39230bcc..584a0c277e 100644
--- a/content/docs/v2024.4.27/guides/kafka/clustering/topology-cluster/index.md
+++ b/content/docs/v2024.4.27/guides/kafka/clustering/topology-cluster/index.md
@@ -1,11 +1,11 @@
---
title: Kafka Topology Cluster
-menu: null
-docs_v2024.4.27: null
-identifier: kf-topology-cluster
-name: Topology Cluster
-parent: kf-clustering
-weight: 20
+menu:
+ docs_v2024.4.27:
+ identifier: kf-topology-cluster
+ name: Topology Cluster
+ parent: kf-clustering
+ weight: 20
menu_name: docs_v2024.4.27
section_menu_id: guides
info:
@@ -91,7 +91,7 @@ issuer.cert-manager.io/kafka-ca-issuer created
### Provision TLS secure Kafka
-For this demo, we are going to provision kafka version `3.3.2` with 3 controllers and 3 brokers. To learn more about Kafka CR, visit [here](/docs/v2024.4.27/guides/kafka/concepts/kafka). visit [here](/docs/v2024.4.27/guides/kafka/concepts/catalog) to learn more about KafkaVersion CR.
+For this demo, we are going to provision Kafka version `3.6.1` with 3 controllers and 3 brokers. To learn more about the Kafka CR, visit [here](/docs/v2024.4.27/guides/kafka/concepts/kafka). Visit [here](/docs/v2024.4.27/guides/kafka/concepts/kafkaversion) to learn more about the KafkaVersion CR.
```yaml
apiVersion: kubedb.com/v1alpha2
@@ -100,7 +100,7 @@ metadata:
name: kafka-prod
namespace: demo
spec:
- version: 3.3.2
+ version: 3.6.1
enableSSL: true
tls:
issuerRef:
@@ -142,10 +142,10 @@ Watch the bootstrap progress:
```bash
$ kubectl get kf -n demo -w
NAME TYPE VERSION STATUS AGE
-kafka-prod kubedb.com/v1alpha2 3.3.2 Provisioning 6s
-kafka-prod kubedb.com/v1alpha2 3.3.2 Provisioning 14s
-kafka-prod kubedb.com/v1alpha2 3.3.2 Provisioning 50s
-kafka-prod kubedb.com/v1alpha2 3.3.2 Ready 68s
+kafka-prod kubedb.com/v1alpha2 3.6.1 Provisioning 6s
+kafka-prod kubedb.com/v1alpha2 3.6.1 Provisioning 14s
+kafka-prod kubedb.com/v1alpha2 3.6.1 Provisioning 50s
+kafka-prod kubedb.com/v1alpha2 3.6.1 Ready 68s
```
Hence, the cluster is ready to use.
@@ -170,7 +170,7 @@ statefulset.apps/kafka-prod-broker 3/3 4m10s
statefulset.apps/kafka-prod-controller 3/3 4m8s
NAME TYPE VERSION AGE
-appbinding.appcatalog.appscode.com/kafka-prod kubedb.com/kafka 3.3.2 4m8s
+appbinding.appcatalog.appscode.com/kafka-prod kubedb.com/kafka 3.6.1 4m8s
NAME TYPE DATA AGE
secret/kafka-prod-admin-cred kubernetes.io/basic-auth 2 4m14s
@@ -324,6 +324,6 @@ $ kubectl delete namespace demo
- Deploy [dedicated topology cluster](/docs/v2024.4.27/guides/kafka/clustering/topology-cluster/) for Apache Kafka
- Monitor your Kafka cluster with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.4.27/guides/kafka/monitoring/using-prometheus-operator).
- Detail concepts of [Kafka object](/docs/v2024.4.27/guides/kafka/concepts/kafka).
-- Detail concepts of [KafkaVersion object](/docs/v2024.4.27/guides/kafka/concepts/catalog).
+- Detail concepts of [KafkaVersion object](/docs/v2024.4.27/guides/kafka/concepts/kafkaversion).
- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/v2024.4.27/guides/kafka/cli/cli).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
\ No newline at end of file
diff --git a/content/docs/v2024.4.27/guides/kafka/concepts/appbinding.md b/content/docs/v2024.4.27/guides/kafka/concepts/appbinding.md
index ae2b1fb77e..a8f4ec48a8 100644
--- a/content/docs/v2024.4.27/guides/kafka/concepts/appbinding.md
+++ b/content/docs/v2024.4.27/guides/kafka/concepts/appbinding.md
@@ -5,7 +5,7 @@ menu:
identifier: kf-appbinding-concepts
name: AppBinding
parent: kf-concepts-kafka
- weight: 21
+ weight: 35
menu_name: docs_v2024.4.27
section_menu_id: guides
info:
@@ -45,7 +45,7 @@ kind: AppBinding
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
- {"apiVersion":"kubedb.com/v1alpha2","kind":"Kafka","metadata":{"annotations":{},"name":"kafka","namespace":"demo"},"spec":{"enableSSL":true,"monitor":{"agent":"prometheus.io/operator","prometheus":{"exporter":{"port":9091},"serviceMonitor":{"interval":"10s","labels":{"release":"prometheus"}}}},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","tls":{"issuerRef":{"apiGroup":"cert-manager.io","kind":"Issuer","name":"kafka-ca-issuer"}},"version":"3.4.0"}}
+ {"apiVersion":"kubedb.com/v1alpha2","kind":"Kafka","metadata":{"annotations":{},"name":"kafka","namespace":"demo"},"spec":{"enableSSL":true,"monitor":{"agent":"prometheus.io/operator","prometheus":{"exporter":{"port":9091},"serviceMonitor":{"interval":"10s","labels":{"release":"prometheus"}}}},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","tls":{"issuerRef":{"apiGroup":"cert-manager.io","kind":"Issuer","name":"kafka-ca-issuer"}},"version":"3.6.1"}}
creationTimestamp: "2023-03-27T08:04:43Z"
generation: 1
labels:
@@ -81,7 +81,7 @@ spec:
tlsSecret:
name: kafka-client-cert
type: kubedb.com/kafka
- version: 3.4.0
+ version: 3.6.1
```
Here, we are going to describe the sections of an `AppBinding` crd.
diff --git a/content/docs/v2024.4.27/guides/kafka/concepts/connectcluster.md b/content/docs/v2024.4.27/guides/kafka/concepts/connectcluster.md
new file mode 100644
index 0000000000..bccc92751f
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/kafka/concepts/connectcluster.md
@@ -0,0 +1,367 @@
+---
+title: ConnectCluster CRD
+menu:
+ docs_v2024.4.27:
+ identifier: kf-connectcluster-concepts
+ name: ConnectCluster
+ parent: kf-concepts-kafka
+ weight: 15
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# ConnectCluster
+
+## What is ConnectCluster
+
+`ConnectCluster` is a Kubernetes `Custom Resource Definition` (CRD). It provides declarative configuration for a [Kafka Connect](https://kafka.apache.org/) cluster in a Kubernetes native way. You only need to describe the desired configuration in a `ConnectCluster` object, and the KubeDB operator will create Kubernetes objects in the desired state for you.
+
+## ConnectCluster Spec
+
+As with all other Kubernetes objects, a ConnectCluster needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example ConnectCluster object.
+
+```yaml
+apiVersion: kafka.kubedb.com/v1alpha1
+kind: ConnectCluster
+metadata:
+ name: connectcluster
+ namespace: demo
+spec:
+ version: 3.6.1
+ healthChecker:
+ failureThreshold: 3
+ periodSeconds: 20
+ timeoutSeconds: 10
+ disableSecurity: false
+ authSecret:
+ name: connectcluster-auth
+ enableSSL: true
+ keystoreCredSecret:
+ name: connectcluster-keystore-cred
+ tls:
+ issuerRef:
+ apiGroup: cert-manager.io
+ kind: Issuer
+ name: connectcluster-ca-issuer
+ certificates:
+ - alias: server
+ secretName: connectcluster-server-cert
+ - alias: client
+ secretName: connectcluster-client-cert
+ configSecret:
+ name: custom-connectcluster-config
+ replicas: 3
+ connectorPlugins:
+ - gcs-0.13.0
+ - mongodb-1.11.0
+ - mysql-2.4.2.final
+ - postgres-2.4.2.final
+ - s3-2.15.0
+ - jdbc-2.6.1.final
+ kafkaRef:
+ name: kafka
+ namespace: demo
+ podTemplate:
+ metadata:
+ annotations:
+ passMe: ToDatabasePod
+ labels:
+ thisLabel: willGoToPod
+ controller:
+ annotations:
+ passMe: ToStatefulSet
+ labels:
+ thisLabel: willGoToSts
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ exporter:
+ port: 56790
+ serviceMonitor:
+ labels:
+ release: prometheus
+ interval: 10s
+ terminationPolicy: WipeOut
+```
+
+### spec.version
+
+`spec.version` is a required field specifying the name of the [KafkaVersion](/docs/v2024.4.27/guides/kafka/concepts/kafkaversion) CR where the docker images are specified. Currently, when you install KubeDB, it creates the following `KafkaVersion` resources,
+
+- `3.3.2`
+- `3.4.1`
+- `3.5.1`
+- `3.5.2`
+- `3.6.0`
+- `3.6.1`
+
+### spec.replicas
+
+`spec.replicas` specifies the number of worker nodes in the ConnectCluster.
+
+KubeDB uses `PodDisruptionBudget` to ensure that a majority of these replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum is maintained.
+
+### spec.disableSecurity
+
+`spec.disableSecurity` is an optional field that specifies whether to disable all kinds of security features such as basic authentication and TLS. The default value of this field is `false`.
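+
+For example, to run a test-only ConnectCluster without authentication or TLS (not recommended for production):
+
+```yaml
+spec:
+  disableSecurity: true
+```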
+
+### spec.connectorPlugins
+
+`spec.connectorPlugins` is an optional field that specifies the list of connector plugins to be installed in the ConnectCluster worker node. The field takes a list of strings where each string represents the name of the KafkaConnectorVersion CR. To learn more about KafkaConnectorVersion CR, visit [here](/docs/v2024.4.27/guides/kafka/concepts/kafkaconnectorversion).
+```yaml
+connectorPlugins:
+ -
+ -
+```
+
+### spec.kafkaRef
+
+`spec.kafkaRef` is a required field that specifies the name and namespace of the AppBinding for the `Kafka` object that the `ConnectCluster` object is associated with.
+```yaml
+kafkaRef:
+ name:
+ namespace:
+```
+
+### spec.configSecret
+
+`spec.configSecret` is an optional field that specifies the name of the secret containing the custom configuration for the ConnectCluster. The secret should contain a key `config.properties` which contains the custom configuration for the ConnectCluster. The default value of this field is `nil`.
+```yaml
+configSecret:
+ name:
+```
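+
+A sketch of such a secret, reusing the name from the example above (the worker properties shown are standard Kafka Connect settings chosen for illustration, not KubeDB defaults):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: custom-connectcluster-config
+  namespace: demo
+type: Opaque
+stringData:
+  config.properties: |
+    key.converter=org.apache.kafka.connect.json.JsonConverter
+    value.converter=org.apache.kafka.connect.json.JsonConverter
+    offset.flush.interval.ms=10000
+```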
+
+### spec.authSecret
+
+`spec.authSecret` is an optional field that points to a Secret used to hold credentials for `ConnectCluster` username and password. If not set, KubeDB operator creates a new Secret `{connectcluster-object-name}-connect-cred` for storing the username and password for each ConnectCluster object.
+
+We can use this field in 3 modes.
+
+1. Using an external secret. In this case, you need to create an auth secret first with the required fields, then specify the secret name when creating the ConnectCluster object using `spec.authSecret.name` and set `spec.authSecret.externallyManaged` to `true`.
+```yaml
+authSecret:
+ name:
+ externallyManaged: true
+```
+
+2. Specifying the secret name only. In this case, you need to specify the secret name when creating the ConnectCluster object using `spec.authSecret.name`. `externallyManaged` is `false` by default.
+```yaml
+authSecret:
+ name:
+```
+
+3. Let KubeDB do everything for you. In this case, no work for you.
+
+AuthSecret contains a `username` key and a `password` key which contain the username and password respectively for the ConnectCluster user.
+
+Example:
+
+```bash
+$ kubectl create secret generic kcc-auth -n demo \
+--from-literal=username=jhon-doe \
+--from-literal=password=6q8u_2jMOW-OOZXk
+secret "kcc-auth" created
+```
+
+```yaml
+apiVersion: v1
+data:
+ password: NnE4dV8yak1PVy1PT1pYaw==
+ username: amhvbi1kb2U=
+kind: Secret
+metadata:
+ name: kcc-auth
+ namespace: demo
+type: Opaque
+```
+
+Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
+
+### spec.enableSSL
+
+`spec.enableSSL` is an `optional` field that specifies whether to enable TLS for the HTTP layer. The default value of this field is `false`.
+
+```yaml
+spec:
+ enableSSL: true
+```
+
+### spec.keystoreCredSecret
+
+`spec.keystoreCredSecret` is an `optional` field that specifies the name of the secret containing the keystore credentials for the ConnectCluster. The secret should contain three keys: `ssl.keystore.password`, `ssl.key.password`, and `ssl.truststore.password`. The default value of this field is `nil`.
+
+```yaml
+spec:
+ keystoreCredSecret:
+ name:
+```
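+
+A sketch of such a secret with the three keys, reusing the name from the example at the top of this page (the passwords are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: connectcluster-keystore-cred
+  namespace: demo
+type: Opaque
+stringData:
+  ssl.keystore.password: "<keystore-password>"
+  ssl.key.password: "<key-password>"
+  ssl.truststore.password: "<truststore-password>"
+```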
+
+### spec.tls
+
+`spec.tls` specifies the TLS/SSL configurations. The KubeDB operator supports TLS management by using the [cert-manager](https://cert-manager.io/). Currently, the operator only supports the `PKCS#8` encoded certificates.
+
+```yaml
+spec:
+ tls:
+ issuerRef:
+ apiGroup: "cert-manager.io"
+ kind: Issuer
+ name: kcc-issuer
+ certificates:
+ - alias: server
+ privateKey:
+ encoding: PKCS8
+ secretName: kcc-client-cert
+ subject:
+ organizations:
+ - kubedb
+ - alias: http
+ privateKey:
+ encoding: PKCS8
+ secretName: kcc-server-cert
+ subject:
+ organizations:
+ - kubedb
+```
+
+The `spec.tls` contains the following fields:
+
+- `tls.issuerRef` - is an `optional` field that references to the `Issuer` or `ClusterIssuer` custom resource object of [cert-manager](https://cert-manager.io/docs/concepts/issuer/). It is used to generate the necessary certificate secrets for ConnectCluster. If the `issuerRef` is not specified, the operator creates a self-signed CA and also creates necessary certificate (valid: 365 days) secrets using that CA.
+ - `apiGroup` - is the group name of the resource that is being referenced. Currently, the only supported value is `cert-manager.io`.
+ - `kind` - is the type of resource that is being referenced. The supported values are `Issuer` and `ClusterIssuer`.
+ - `name` - is the name of the resource ( `Issuer` or `ClusterIssuer` ) that is being referenced.
+
+- `tls.certificates` - is an `optional` field that specifies a list of certificate configurations used to configure the certificates. It has the following fields:
+ - `alias` - represents the identifier of the certificate. It has the following possible value:
+ - `server` - is used for the server certificate configuration.
+ - `client` - is used for the client certificate configuration.
+
+ - `secretName` - ( `string` | `"-alias-cert"` ) - specifies the k8s secret name that holds the certificates.
+
+ - `subject` - specifies an `X.509` distinguished name (DN). It has the following configurable fields:
+ - `organizations` ( `[]string` | `nil` ) - is a list of organization names.
+ - `organizationalUnits` ( `[]string` | `nil` ) - is a list of organization unit names.
+ - `countries` ( `[]string` | `nil` ) - is a list of country names (ie. Country Codes).
+ - `localities` ( `[]string` | `nil` ) - is a list of locality names.
+ - `provinces` ( `[]string` | `nil` ) - is a list of province names.
+ - `streetAddresses` ( `[]string` | `nil` ) - is a list of street addresses.
+ - `postalCodes` ( `[]string` | `nil` ) - is a list of postal codes.
+ - `serialNumber` ( `string` | `""` ) is a serial number.
+
+ For more details, visit [here](https://golang.org/pkg/crypto/x509/pkix/#Name).
+
+ - `duration` ( `string` | `""` ) - is the period during which the certificate is valid. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as `"300m"`, `"1.5h"` or `"20h45m"`. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ - `renewBefore` ( `string` | `""` ) - is a specifiable time before expiration duration.
+ - `dnsNames` ( `[]string` | `nil` ) - is a list of subject alt names.
+ - `ipAddresses` ( `[]string` | `nil` ) - is a list of IP addresses.
+ - `uris` ( `[]string` | `nil` ) - is a list of URI Subject Alternative Names.
+ - `emailAddresses` ( `[]string` | `nil` ) - is a list of email Subject Alternative Names.
+
+
+
+### spec.monitor
+
+ConnectCluster managed by KubeDB can be monitored with Prometheus operator out-of-the-box. To learn more,
+- [Monitor Apache Kafka with Prometheus operator](/docs/v2024.4.27/guides/kafka/monitoring/using-prometheus-operator)
+
+### spec.podTemplate
+
+KubeDB allows providing a template for the pods through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the ConnectCluster.
+
+KubeDB accepts the following fields to set in `spec.podTemplate`:
+
+- metadata:
+ - annotations (pod's annotation)
+ - labels (pod's labels)
+- controller:
+ - annotations (statefulset's annotation)
+ - labels (statefulset's labels)
+- spec:
+ - volumes
+ - initContainers
+ - containers
+ - imagePullSecrets
+ - nodeSelector
+ - affinity
+ - serviceAccountName
+ - schedulerName
+ - tolerations
+ - priorityClassName
+ - priority
+ - securityContext
+ - livenessProbe
+ - readinessProbe
+ - lifecycle
+
+You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/39bf8b2/api/v2/types.go#L44-L279). Uses of some fields of `spec.podTemplate` are described below.
+
+#### spec.podTemplate.spec.nodeSelector
+
+`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).
+
+#### spec.podTemplate.spec.resources
+
+`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
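+
+A sketch following the fields described in this section (the node label and resource values are illustrative):
+
+```yaml
+spec:
+  podTemplate:
+    spec:
+      nodeSelector:
+        disktype: ssd
+      resources:
+        requests:
+          cpu: 500m
+          memory: 1Gi
+        limits:
+          memory: 1Gi
+```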
+
+### spec.serviceTemplates
+
+You can also provide a template for the services created by the KubeDB operator for the ConnectCluster through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services.
+
+KubeDB allows the following fields to set in `spec.serviceTemplates`:
+- `alias` represents the identifier of the service. It has the following possible value:
+ - `stats` is used for the exporter service identification.
+- metadata:
+ - labels
+ - annotations
+- spec:
+ - type
+ - ports
+ - clusterIP
+ - externalIPs
+ - loadBalancerIP
+ - loadBalancerSourceRanges
+ - externalTrafficPolicy
+ - healthCheckNodePort
+ - sessionAffinityConfig
+
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.21.1/api/v1/types.go#L237) to understand these fields in detail.
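+
+A sketch of a service template for the exporter (`stats`) service (the label and service type are illustrative):
+
+```yaml
+spec:
+  serviceTemplates:
+    - alias: stats
+      metadata:
+        labels:
+          team: ops
+      spec:
+        type: LoadBalancer
+```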
+
+### spec.terminationPolicy
+
+`spec.terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `ConnectCluster` CR or which resources KubeDB should keep or delete when you delete the `ConnectCluster` CR. KubeDB provides the following three termination policies:
+
+- Delete
+- DoNotTerminate
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.
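+
+For example, to guard a ConnectCluster against accidental deletion:
+
+```yaml
+spec:
+  terminationPolicy: DoNotTerminate
+```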
+
+### spec.healthChecker
+
+It defines the attributes for the health checker.
+- `spec.healthChecker.periodSeconds` specifies how often to perform the health check.
+- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out.
+- `spec.healthChecker.failureThreshold` specifies minimum consecutive failures for the healthChecker to be considered failed.
+- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not.
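+
+A sketch using the values from the example at the top of this page, plus the optional write-check toggle:
+
+```yaml
+spec:
+  healthChecker:
+    failureThreshold: 3
+    periodSeconds: 20
+    timeoutSeconds: 10
+    disableWriteCheck: true   # illustrative; skips the write check
+```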
+
+Know details about KubeDB Health checking from this [blog post](https://appscode.com/blog/post/kubedb-health-checker/).
+
+## Next Steps
+
+- Learn how to use KubeDB to run an Apache Kafka Connect cluster [here](/docs/v2024.4.27/guides/kafka/README).
+- Monitor your ConnectCluster with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.4.27/guides/kafka/monitoring/using-prometheus-operator).
+- Detail concepts of [KafkaConnectorVersion object](/docs/v2024.4.27/guides/kafka/concepts/kafkaconnectorversion).
+- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/v2024.4.27/guides/kafka/cli/cli).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
diff --git a/content/docs/v2024.4.27/guides/kafka/concepts/connector.md b/content/docs/v2024.4.27/guides/kafka/concepts/connector.md
new file mode 100644
index 0000000000..4e57aaec6e
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/kafka/concepts/connector.md
@@ -0,0 +1,88 @@
+---
+title: Connector CRD
+menu:
+ docs_v2024.4.27:
+ identifier: kf-connector-concepts
+ name: Connector
+ parent: kf-concepts-kafka
+ weight: 20
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# Connector
+
+## What is Connector
+
+`Connector` is a Kubernetes `Custom Resource Definition` (CRD). It provides declarative configuration for a [Kafka Connect](https://kafka.apache.org/) connector in a Kubernetes native way. You only need to describe the desired configuration in a `Connector` object, and the KubeDB operator will create Kubernetes objects in the desired state for you.
+
+## Connector Spec
+
+As with all other Kubernetes objects, a Connector needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example Connector object.
+
+```yaml
+apiVersion: kafka.kubedb.com/v1alpha1
+kind: Connector
+metadata:
+ name: mongodb-source-connector
+ namespace: demo
+spec:
+ configSecret:
+ name: mongodb-source-config
+ connectClusterRef:
+ name: connectcluster-quickstart
+ namespace: demo
+ terminationPolicy: WipeOut
+```
+
+### spec.configSecret
+
+`spec.configSecret` is a required field that specifies the name of the secret containing the configuration for the Connector. The secret should contain a key `config.properties` which contains the configuration for the Connector.
+```yaml
+spec:
+ configSecret:
+ name:
+```
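+
+A hypothetical secret for the MongoDB source connector from the supported-plugin table (the connection details are placeholders; the available config keys depend on the connector plugin):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mongodb-source-config
+  namespace: demo
+type: Opaque
+stringData:
+  config.properties: |
+    connector.class=com.mongodb.kafka.connect.MongoSourceConnector
+    tasks.max=1
+    connection.uri=mongodb://user:pass@mongo.demo.svc:27017
+    topic.prefix=demo-mongo
+```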
+
+### spec.connectClusterRef
+
+`spec.connectClusterRef` is a required field that specifies the name and namespace of the `ConnectCluster` object that the `Connector` object is associated with. This is an AppBinding reference for the `ConnectCluster` object.
+```yaml
+spec:
+ connectClusterRef:
+ name:
+ namespace:
+```
+
+### spec.terminationPolicy
+
+`spec.terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `Connector` CR or which resources KubeDB should keep or delete when you delete the `Connector` CR. KubeDB provides the following three termination policies:
+
+- Delete
+- DoNotTerminate
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement `DoNotTerminate` feature. If admission webhook is enabled, `DoNotTerminate` prevents users from deleting the resource as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+Termination policy `WipeOut` will delete the connector from the ConnectCluster when the Connector CR is deleted, while `Delete` keeps the connector after the Connector CR is deleted.
+
+## Next Steps
+
+- Learn how to use KubeDB to run an Apache Kafka cluster [here](/docs/v2024.4.27/guides/kafka/quickstart/overview/kafka/).
+- Learn how to use KubeDB to run an Apache Kafka Connect cluster [here](/docs/v2024.4.27/guides/kafka/quickstart/overview/connectcluster/).
+- Detail concepts of [KafkaConnectorVersion object](/docs/v2024.4.27/guides/kafka/concepts/kafkaconnectorversion).
+- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/v2024.4.27/guides/kafka/cli/cli).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
diff --git a/content/docs/v2024.4.27/guides/kafka/concepts/kafka.md b/content/docs/v2024.4.27/guides/kafka/concepts/kafka.md
index c2fa0371a9..cb93e8e738 100644
--- a/content/docs/v2024.4.27/guides/kafka/concepts/kafka.md
+++ b/content/docs/v2024.4.27/guides/kafka/concepts/kafka.md
@@ -42,6 +42,8 @@ metadata:
spec:
authSecret:
name: kafka-admin-cred
+ configSecret:
+ name: kafka-custom-config
enableSSL: true
healthChecker:
failureThreshold: 3
@@ -109,21 +111,24 @@ spec:
agent: prometheus.io/operator
prometheus:
exporter:
- port: 9091
+ port: 56790
serviceMonitor:
labels:
release: prometheus
interval: 10s
- version: 3.4.0
+ version: 3.6.1
```
### spec.version
-`spec.version` is a required field specifying the name of the [KafkaVersion](/docs/v2024.4.27/guides/kafka/concepts/catalog) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `Kafka` resources,
+`spec.version` is a required field specifying the name of the [KafkaVersion](/docs/v2024.4.27/guides/kafka/concepts/kafkaversion) CRD where the docker images are specified. Currently, when you install KubeDB, it creates the following `KafkaVersion` resources,
-- `3.3.0`
- `3.3.2`
-- `3.4.0`
+- `3.4.1`
+- `3.5.1`
+- `3.5.2`
+- `3.6.0`
+- `3.6.1`
### spec.replicas
@@ -178,6 +183,10 @@ type: Opaque
Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
+### spec.configSecret
+
+`spec.configSecret` is an optional field that points to a Secret used to hold custom Kafka configuration. If not set, the KubeDB operator will use the default configuration for Kafka.
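+
+A minimal snippet, reusing the secret name from the example above:
+
+```yaml
+spec:
+  configSecret:
+    name: kafka-custom-config
+```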
+
### spec.topology
`spec.topology` represents the topology configuration for Kafka cluster in KRaft mode.
@@ -251,17 +260,15 @@ spec:
The `spec.tls` contains the following fields:
-- `tls.issuerRef` - is an `optional` field that references to the `Issuer` or `ClusterIssuer` custom resource object of [cert-manager](https://cert-manager.io/docs/concepts/issuer/). It is used to generate the necessary certificate secrets for Elasticsearch. If the `issuerRef` is not specified, the operator creates a self-signed CA and also creates necessary certificate (valid: 365 days) secrets using that CA.
+- `tls.issuerRef` - is an `optional` field that references to the `Issuer` or `ClusterIssuer` custom resource object of [cert-manager](https://cert-manager.io/docs/concepts/issuer/). It is used to generate the necessary certificate secrets for Kafka. If the `issuerRef` is not specified, the operator creates a self-signed CA and also creates necessary certificate (valid: 365 days) secrets using that CA.
- `apiGroup` - is the group name of the resource that is being referenced. Currently, the only supported value is `cert-manager.io`.
- `kind` - is the type of resource that is being referenced. The supported values are `Issuer` and `ClusterIssuer`.
- `name` - is the name of the resource ( `Issuer` or `ClusterIssuer` ) that is being referenced.
- `tls.certificates` - is an `optional` field that specifies a list of certificate configurations used to configure the certificates. It has the following fields:
- `alias` - represents the identifier of the certificate. It has the following possible value:
- - `transport` - is used for the transport layer certificate configuration.
- - `http` - is used for the HTTP layer certificate configuration.
- - `admin` - is used for the admin certificate configuration. Available for the `SearchGuard` and the `OpenDistro` auth-plugins.
- - `metrics-exporter` - is used for the metrics-exporter sidecar certificate configuration.
+ - `server` - is used for the server certificate configuration.
+ - `client` - is used for the client certificate configuration.
- `secretName` - ( `string` | `"-alias-cert"` ) - specifies the k8s secret name that holds the certificates.
@@ -321,10 +328,9 @@ KubeDB accept following fields to set in `spec.podTemplate:`
- annotations (statefulset's annotation)
- labels (statefulset's labels)
- spec:
- - args
- - env
- resources
- initContainers
+ - containers
- imagePullSecrets
- nodeSelector
- affinity
@@ -338,18 +344,10 @@ KubeDB accept following fields to set in `spec.podTemplate:`
- readinessProbe
- lifecycle
-You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/ea366935d5bad69d7643906c7556923271592513/api/v1/types.go#L42-L259). Uses of some field of `spec.podTemplate` is described below,
+You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/39bf8b2/api/v2/types.go#L44-L279). Uses of some fields of `spec.podTemplate` are described below.
NB. If `spec.topology` is set, then `spec.podTemplate` needs to be empty. Instead use `spec.topology..podTemplate`
-#### spec.podTemplate.spec.args
-
-`spec.podTemplate.spec.args` is an optional field. This can be used to provide additional arguments to database installation.
-
-#### spec.podTemplate.spec.env
-
-`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the Kafka docker image.
-
#### spec.podTemplate.spec.nodeSelector
`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) .
@@ -401,10 +399,10 @@ Know details about KubeDB Health checking from this [blog post](https://appscode
## Next Steps
-- Learn how to use KubeDB to run a Apache Kafka cluster [here](/docs/v2024.4.27/guides/kafka/README).
+- Learn how to use KubeDB to run an Apache Kafka cluster [here](/docs/v2024.4.27/guides/kafka/README).
- Deploy [dedicated topology cluster](/docs/v2024.4.27/guides/kafka/clustering/topology-cluster/) for Apache Kafka
- Deploy [combined cluster](/docs/v2024.4.27/guides/kafka/clustering/combined-cluster/) for Apache Kafka
- Monitor your Kafka cluster with KubeDB using [`out-of-the-box` Prometheus operator](/docs/v2024.4.27/guides/kafka/monitoring/using-prometheus-operator).
-- Detail concepts of [KafkaVersion object](/docs/v2024.4.27/guides/kafka/concepts/catalog).
+- Detail concepts of [KafkaVersion object](/docs/v2024.4.27/guides/kafka/concepts/kafkaversion).
- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/v2024.4.27/guides/kafka/cli/cli).
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
diff --git a/content/docs/v2024.4.27/guides/kafka/concepts/kafkaconnectorversion.md b/content/docs/v2024.4.27/guides/kafka/concepts/kafkaconnectorversion.md
new file mode 100644
index 0000000000..fe60ee6ffb
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/kafka/concepts/kafkaconnectorversion.md
@@ -0,0 +1,102 @@
+---
+title: KafkaConnectorVersion CRD
+menu:
+ docs_v2024.4.27:
+ identifier: kf-kafkaconnectorversion-concepts
+ name: KafkaConnectorVersion
+ parent: kf-concepts-kafka
+ weight: 30
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# KafkaConnectorVersion
+
+## What is KafkaConnectorVersion
+
+`KafkaConnectorVersion` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration to specify the docker images to be used for installing Connector plugins on ConnectCluster worker nodes with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `KafkaConnectorVersion` custom resource will be created automatically for every supported Kafka Connector version. You have to specify a list of `KafkaConnectorVersion` CR names in the `spec.connectorPlugins` field of the [ConnectCluster](/docs/v2024.4.27/guides/kafka/concepts/connectcluster) CR. Then, KubeDB will use the docker images specified in the `KafkaConnectorVersion` CRs to install your connector plugins.
+
+Using a separate CR for specifying the respective docker images and policies allows them to be managed independently of the KubeDB operator. This also allows users to use a custom image for the connector plugins.
+
+## KafkaConnectorVersion Spec
+
+As with all other Kubernetes objects, a KafkaConnectorVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: KafkaConnectorVersion
+metadata:
+ annotations:
+ meta.helm.sh/release-name: kubedb
+ meta.helm.sh/release-namespace: kubedb
+ creationTimestamp: "2024-05-02T06:38:17Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/instance: kubedb
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/name: kubedb-catalog
+ app.kubernetes.io/version: v2024.4.27
+ helm.sh/chart: kubedb-catalog-v2024.4.27
+ name: mongodb-1.11.0
+ resourceVersion: "2873"
+ uid: a5808f31-9d27-4979-8a7d-f3357dbba6ba
+spec:
+ connectorPlugin:
+ image: ghcr.io/appscode-images/kafka-connector-mongodb:1.11.0
+ securityContext:
+ runAsUser: 1001
+ type: MongoDB
+ version: 1.11.0
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `KafkaConnectorVersion` CR. You have to specify this name in `spec.connectorPlugins` field of ConnectCluster CR.
+
+We follow this convention for naming KafkaConnectorVersion CR:
+
+- Name format: `{Plugin-Type}-{version}`
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of Connector plugin that has been used to build the docker image specified in `spec.connectorPlugin.image` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will skip processing this CRD object and will add an event to the CRD object specifying that the DB version is deprecated.
+
+### spec.connectorPlugin.image
+
+`spec.connectorPlugin.image` is a required field that specifies the docker image which will be used by the KubeDB operator to install the connector plugin.
+
+For reference, the Helm command below installs or upgrades KubeDB with additional pod security policies:
+
+```bash
+helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
+ --namespace kubedb --create-namespace \
+ --set additionalPodSecurityPolicies[0]=custom-db-policy \
+ --set additionalPodSecurityPolicies[1]=custom-snapshotter-policy \
+ --set-file global.license=/path/to/the/license.txt \
+ --wait --burst-limit=10000 --debug
+```
+
+## Next Steps
+
+- Learn about Kafka CRD [here](/docs/v2024.4.27/guides/kafka/concepts/kafka).
+- Learn about ConnectCluster CRD [here](/docs/v2024.4.27/guides/kafka/concepts/connectcluster).
+- Deploy your first ConnectCluster with KubeDB by following the guide [here](/docs/v2024.4.27/guides/kafka/quickstart/overview/connectcluster/).
diff --git a/content/docs/v2024.4.27/guides/kafka/concepts/catalog.md b/content/docs/v2024.4.27/guides/kafka/concepts/kafkaversion.md
similarity index 71%
rename from content/docs/v2024.4.27/guides/kafka/concepts/catalog.md
rename to content/docs/v2024.4.27/guides/kafka/concepts/kafkaversion.md
index 1f5a65cae7..1e6c3c4009 100644
--- a/content/docs/v2024.4.27/guides/kafka/concepts/catalog.md
+++ b/content/docs/v2024.4.27/guides/kafka/concepts/kafkaversion.md
@@ -5,7 +5,7 @@ menu:
identifier: kf-catalog-concepts
name: KafkaVersion
parent: kf-concepts-kafka
- weight: 15
+ weight: 25
menu_name: docs_v2024.4.27
section_menu_id: guides
info:
@@ -29,9 +29,9 @@ info:
`KafkaVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for [Kafka](https://kafka.apache.org) database deployed with KubeDB in a Kubernetes native way.
-When you install KubeDB, a `KafkaVersion` custom resource will be created automatically for every supported Kafka versions. You have to specify the name of `KafkaVersion` crd in `spec.version` field of [Kafka](/docs/v2024.4.27/guides/kafka/concepts/kafka) crd. Then, KubeDB will use the docker images specified in the `KafkaVersion` crd to create your expected database.
+When you install KubeDB, a `KafkaVersion` custom resource will be created automatically for every supported Kafka version. You have to specify the name of the `KafkaVersion` CR in the `spec.version` field of the [Kafka](/docs/v2024.4.27/guides/kafka/concepts/kafka) CR. Then, KubeDB will use the docker images specified in the `KafkaVersion` CR to create your expected database.
-Using a separate crd for specifying respective docker images, and pod security policy names allow us to modify the images, and policies independent of KubeDB operator.This will also allow the users to use a custom image for the database.
+Using a separate CRD for specifying the respective docker images and pod security policy names allows us to modify the images and policies independent of the KubeDB operator. This also allows users to use a custom image for the database.
## KafkaVersion Spec
@@ -42,40 +42,42 @@ apiVersion: catalog.kubedb.com/v1alpha1
kind: KafkaVersion
metadata:
annotations:
- meta.helm.sh/release-name: kubedb-catalog
+ meta.helm.sh/release-name: kubedb
meta.helm.sh/release-namespace: kubedb
- creationTimestamp: "2023-03-23T10:15:24Z"
- generation: 2
+ creationTimestamp: "2024-05-02T06:38:17Z"
+ generation: 1
labels:
- app.kubernetes.io/instance: kubedb-catalog
+ app.kubernetes.io/instance: kubedb
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubedb-catalog
- app.kubernetes.io/version: v2023.02.28
- helm.sh/chart: kubedb-catalog-v2023.02.28
- name: 3.4.0
- resourceVersion: "472767"
- uid: 36a167a3-5218-4e32-b96d-d6b5b0c86125
+ app.kubernetes.io/version: v2024.4.27
+ helm.sh/chart: kubedb-catalog-v2024.4.27
+ name: 3.6.1
+ resourceVersion: "2881"
+ uid: 778fb80c-b37a-4ac6-bfaa-fec83e5f49c7
spec:
connectCluster:
image: ghcr.io/appscode-images/kafka-connect-cluster:3.6.1
+ cruiseControl:
+ image: ghcr.io/appscode-images/kafka-cruise-control:3.6.1
db:
- image: kubedb/kafka-kraft:3.4.0
+ image: ghcr.io/appscode-images/kafka-kraft:3.6.1
podSecurityPolicies:
databasePolicyName: kafka-db
- version: 3.4.0
- cruiseControl:
- image: ghcr.io/kubedb/cruise-control:3.4.0
+ securityContext:
+ runAsUser: 1001
+ version: 3.6.1
```
### metadata.name
-`metadata.name` is a required field that specifies the name of the `KafkaVersion` crd. You have to specify this name in `spec.version` field of [Kafka](/docs/v2024.4.27/guides/kafka/concepts/kafka) crd.
+`metadata.name` is a required field that specifies the name of the `KafkaVersion` CR. You have to specify this name in the `spec.version` field of the [Kafka](/docs/v2024.4.27/guides/kafka/concepts/kafka) CR.
-We follow this convention for naming KafkaVersion crd:
+We follow this convention for naming KafkaVersion CRs:
- Name format: `{Original Kafka image version}-{modification tag}`
-We use official Apache Kafka release tar files to build docker images for supporting Kafka versions and re-tag the image with v1, v2 etc. modification tag when there's any. An image with higher modification tag will have more features than the images with lower modification tag. Hence, it is recommended to use KafkaVersion crd with the highest modification tag to enjoy the latest features.
+We use official Apache Kafka release tar files to build docker images for the supported Kafka versions and re-tag the image with a v1, v2, etc. modification tag when there is one. An image with a higher modification tag will have more features than images with a lower modification tag. Hence, it is recommended to use the KafkaVersion CR with the highest modification tag to enjoy the latest features.
### spec.version
@@ -91,6 +93,14 @@ The default value of this field is `false`. If `spec.deprecated` is set to `true
`spec.db.image` is a required field that specifies the docker image which will be used to create StatefulSet by KubeDB operator to create expected Kafka database.
+### spec.cruiseControl.image
+
+`spec.cruiseControl.image` is a required field that specifies the docker image which will be used by the KubeDB operator to create the Deployment for the expected Kafka Cruise Control.
+
+### spec.connectCluster.image
+
+`spec.connectCluster.image` is a required field that specifies the docker image which will be used by the KubeDB operator to create the StatefulSet for the expected Kafka Connect Cluster.
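+
+A minimal sketch of how a `Kafka` object references this `KafkaVersion` CR by name (the field values below are illustrative; see the [Kafka](/docs/v2024.4.27/guides/kafka/concepts/kafka) concepts page for the authoritative schema):
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Kafka
+metadata:
+  name: kafka-quickstart
+  namespace: demo
+spec:
+  version: "3.6.1"   # must match metadata.name of a KafkaVersion CR
+  replicas: 3
+  storageType: Durable
+  terminationPolicy: WipeOut
+```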
+
+## User Guide
+
+- [Quickstart SingleStore](/docs/v2024.4.27/guides/singlestore/quickstart/quickstart) with KubeDB Operator.
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
\ No newline at end of file
diff --git a/content/docs/v2024.4.27/guides/singlestore/_index.md b/content/docs/v2024.4.27/guides/singlestore/_index.md
new file mode 100644
index 0000000000..02566a9a7a
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/singlestore/_index.md
@@ -0,0 +1,22 @@
+---
+title: SingleStore
+menu:
+ docs_v2024.4.27:
+ identifier: guides-singlestore
+ name: SingleStore
+ parent: guides
+ weight: 40
+menu_name: docs_v2024.4.27
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
diff --git a/content/docs/v2024.4.27/guides/singlestore/images/singlestore-lifecycle.png b/content/docs/v2024.4.27/guides/singlestore/images/singlestore-lifecycle.png
new file mode 100644
index 0000000000..edf0f03569
Binary files /dev/null and b/content/docs/v2024.4.27/guides/singlestore/images/singlestore-lifecycle.png differ
diff --git a/content/docs/v2024.4.27/guides/singlestore/quickstart/_index.md b/content/docs/v2024.4.27/guides/singlestore/quickstart/_index.md
new file mode 100644
index 0000000000..bdfd08d571
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/singlestore/quickstart/_index.md
@@ -0,0 +1,22 @@
+---
+title: SingleStore Quickstart
+menu:
+ docs_v2024.4.27:
+ identifier: sdb-quickstart-singlestore
+ name: Quickstart
+ parent: guides-singlestore
+ weight: 15
+menu_name: docs_v2024.4.27
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
diff --git a/content/docs/v2024.4.27/guides/singlestore/quickstart/images/singlestore-lifecycle.png b/content/docs/v2024.4.27/guides/singlestore/quickstart/images/singlestore-lifecycle.png
new file mode 100644
index 0000000000..edf0f03569
Binary files /dev/null and b/content/docs/v2024.4.27/guides/singlestore/quickstart/images/singlestore-lifecycle.png differ
diff --git a/content/docs/v2024.4.27/guides/singlestore/quickstart/images/studio-1.png b/content/docs/v2024.4.27/guides/singlestore/quickstart/images/studio-1.png
new file mode 100644
index 0000000000..43fce07685
Binary files /dev/null and b/content/docs/v2024.4.27/guides/singlestore/quickstart/images/studio-1.png differ
diff --git a/content/docs/v2024.4.27/guides/singlestore/quickstart/images/studio-2.png b/content/docs/v2024.4.27/guides/singlestore/quickstart/images/studio-2.png
new file mode 100644
index 0000000000..7c89425291
Binary files /dev/null and b/content/docs/v2024.4.27/guides/singlestore/quickstart/images/studio-2.png differ
diff --git a/content/docs/v2024.4.27/guides/singlestore/quickstart/quickstart.md b/content/docs/v2024.4.27/guides/singlestore/quickstart/quickstart.md
new file mode 100644
index 0000000000..aa7ab39e3d
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/singlestore/quickstart/quickstart.md
@@ -0,0 +1,588 @@
+---
+title: SingleStore Quickstart
+menu:
+ docs_v2024.4.27:
+ identifier: sdb-quickstart-quickstart
+ name: Overview
+ parent: sdb-quickstart-singlestore
+ weight: 15
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# SingleStore QuickStart
+
+This tutorial will show you how to use KubeDB to run a SingleStore database.
+
+![Lifecycle of a SingleStore instance](/docs/v2024.4.27/guides/singlestore/quickstart/images/singlestore-lifecycle.png)
+
+> Note: The yaml files used in this tutorial are stored in [docs/guides/singlestore/quickstart/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/singlestore/quickstart/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install the KubeDB cli on your workstation and the KubeDB operator in your cluster following the steps [here](/docs/v2024.4.27/setup/README), and make sure to install it with a helm command that includes `--set global.featureGates.Singlestore=true` to enable the SingleStore CRD, as sketched below.
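+
+  A minimal sketch of such a command (the chart location follows the pattern used elsewhere in these docs; the license path is a placeholder):
+
+  ```bash
+  helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
+    --namespace kubedb --create-namespace \
+    --set-file global.license=/path/to/the/license.txt \
+    --set global.featureGates.Singlestore=true
+  ```
+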
+- [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) is required to run KubeDB. Check the available StorageClasses in your cluster.
+
+ ```bash
+ $ kubectl get storageclasses
+ NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+ standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 6h22m
+ ```
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+ ```bash
+ $ kubectl create ns demo
+ namespace/demo created
+ ```
+
+## Find Available SingleStoreVersion
+
+When you have installed KubeDB, it creates a `SinglestoreVersion` CR for every supported SingleStore version. Check them by using the `kubectl get singlestoreversions` command. You can also use the `sdbv` shorthand instead of `singlestoreversions`.
+
+```bash
+$ kubectl get singlestoreversions
+NAME VERSION DB_IMAGE DEPRECATED AGE
+8.1.32 8.1.32 ghcr.io/appscode-images/singlestore-node:alma-8.1.32-e3d3cde6da 72m
+8.5.7 8.5.7 ghcr.io/appscode-images/singlestore-node:alma-8.5.7-bf633c1a54 72m
+
+```
+## Create SingleStore License Secret
+
+We need a SingleStore license to create a SingleStore database. So, ensure that you have acquired a license, and then simply pass it to KubeDB via a secret.
+
+```bash
+$ kubectl create secret generic -n demo license-secret \
+ --from-literal=username=license \
+ --from-literal=password='your-license-set-here'
+secret/license-secret created
+```
+
+## Create a SingleStore database
+
+KubeDB implements a `Singlestore` CRD to define the specification of a SingleStore database. Below is the `Singlestore` object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-quickstart
+ namespace: demo
+spec:
+ version: "8.5.7"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "0.5"
+ requests:
+ memory: "2Gi"
+ cpu: "0.5"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "0.5"
+ requests:
+ memory: "2Gi"
+ cpu: "0.5"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ terminationPolicy: WipeOut
+ serviceTemplates:
+ - alias: primary
+ spec:
+ type: LoadBalancer
+ ports:
+ - name: http
+ port: 9999
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/singlestore/quickstart/yamls/quickstart.yaml
+singlestore.kubedb.com/sdb-quickstart created
+```
+Here,
+
+- `spec.version` is the name of the SinglestoreVersion CRD where the docker images are specified. In this tutorial, a SingleStore `8.5.7` database is going to be created.
+- `spec.topology` specifies that the database will run in cluster mode. If this field is nil, the database will run in standalone mode.
+- `spec.topology.aggregator.replicas` or `spec.topology.leaf.replicas` specifies the number of replicas that will be used for the aggregator or leaf nodes.
+- `spec.storageType` specifies the type of storage that will be used for SingleStore database. It can be `Durable` or `Ephemeral`. Default value of this field is `Durable`. If `Ephemeral` is used then KubeDB will create SingleStore database using `EmptyDir` volume. In this case, you don't have to specify `spec.storage` field. This is useful for testing purposes.
+- `spec.topology.aggregator.storage` or `spec.topology.leaf.storage` specifies the StorageClass of PVC dynamically allocated to store data for this database. This storage spec will be passed to the StatefulSet created by KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+- `spec.terminationPolicy` gives flexibility whether to `nullify`(reject) the delete operation of the `Singlestore` crd or which resources KubeDB should keep or delete when you delete the `Singlestore` crd. If the admission webhook is enabled, it prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`. Learn details of all `TerminationPolicy` [here](/docs/v2024.4.27/guides/mysql/concepts/database/#specterminationpolicy).
+
+> Note: `spec.storage` section is used to create PVC for database pod. It will create PVC with storage size specified in `storage.resources.requests` field. Don't specify limits here. PVC does not get resized automatically.
+
+The KubeDB operator watches for `Singlestore` objects using the Kubernetes API. When a `Singlestore` object is created, the KubeDB operator will create a new PetSet and Service with the matching SingleStore object name. The KubeDB operator will also create a governing service for the PetSet, if one is not already present.
+
+```bash
+$ kubectl get petset -n demo
+NAME READY AGE
+sdb-quickstart-leaf 2/2 33s
+sdb-quickstart-aggregator 1/1 37s
+$ kubectl get pvc -n demo
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
+data-sdb-quickstart-leaf-0 Bound pvc-4f45c51b-47d4-4254-8275-782bf3588667 10Gi RWO standard 42s
+data-sdb-quickstart-leaf-1 Bound pvc-769e68f4-80a9-4e3e-b2bc-e974534b9dee 10Gi RWO standard 35s
+data-sdb-quickstart-aggregator-0 Bound pvc-75057e3d-e1d7-4770-905b-6049f2edbcde 1Gi RWO standard 46s
+$ kubectl get pv -n demo
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
+pvc-4f45c51b-47d4-4254-8275-782bf3588667 10Gi RWO Delete Bound demo/data-sdb-quickstart-leaf-0 standard 87s
+pvc-75057e3d-e1d7-4770-905b-6049f2edbcde   1Gi        RWO            Delete           Bound    demo/data-sdb-quickstart-aggregator-0   standard                          91s
+pvc-769e68f4-80a9-4e3e-b2bc-e974534b9dee 10Gi RWO Delete Bound demo/data-sdb-quickstart-leaf-1 standard 80s
+$ kubectl get service -n demo
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+sdb-quickstart LoadBalancer 10.96.27.144 192.10.25.36 3306:32076/TCP,8081:30910/TCP 2m1s
+sdb-quickstart-pods   ClusterIP      None           <none>          3306/TCP                        2m1s
+
+```
+
+KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. Run the following command to see the modified Singlestore object:
+
+```yaml
+$ kubectl get sdb -n demo sdb-quickstart -oyaml
+
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"kubedb.com/v1alpha2","kind":"Singlestore","metadata":{"annotations":{},"name":"sdb-quickstart","namespace":"demo"},"spec":{"licenseSecret":{"name":"license-secret"},"serviceTemplates":[{"alias":"primary","spec":{"ports":[{"name":"http","port":9999}],"type":"LoadBalancer"}}],"storageType":"Durable","terminationPolicy":"WipeOut","topology":{"aggregator":{"podTemplate":{"spec":{"containers":[{"name":"singlestore","resources":{"limits":{"cpu":"0.5","memory":"2Gi"},"requests":{"cpu":"0.5","memory":"2Gi"}}}]}},"replicas":1,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"}},"leaf":{"podTemplate":{"spec":{"containers":[{"name":"singlestore","resources":{"limits":{"cpu":"0.5","memory":"2Gi"},"requests":{"cpu":"0.5","memory":"2Gi"}}}]}},"replicas":2,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"10Gi"}},"storageClassName":"standard"}}},"version":"8.5.7"}}
+ creationTimestamp: "2024-05-06T06:52:58Z"
+ finalizers:
+ - kubedb.com
+ generation: 2
+ name: sdb-quickstart
+ namespace: demo
+ resourceVersion: "448498"
+ uid: 29d6a814-e801-45b5-8217-b59fc77d84e5
+spec:
+ authSecret:
+ name: sdb-quickstart-root-cred
+ healthChecker:
+ failureThreshold: 1
+ periodSeconds: 10
+ timeoutSeconds: 10
+ licenseSecret:
+ name: license-secret
+ podPlacementPolicy:
+ name: default
+ serviceTemplates:
+ - alias: primary
+ metadata: {}
+ spec:
+ ports:
+ - name: http
+ port: 9999
+ type: LoadBalancer
+ storageType: Durable
+ terminationPolicy: WipeOut
+ topology:
+ aggregator:
+ podPlacementPolicy:
+ name: default
+ podTemplate:
+ controller: {}
+ metadata: {}
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ cpu: 500m
+ memory: 2Gi
+ requests:
+ cpu: 500m
+ memory: 2Gi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ - name: singlestore-coordinator
+ resources:
+ limits:
+ memory: 256Mi
+ requests:
+ cpu: 200m
+ memory: 256Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ initContainers:
+ - name: singlestore-init
+ resources:
+ limits:
+ memory: 512Mi
+ requests:
+ cpu: 200m
+ memory: 512Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ securityContext:
+ fsGroup: 999
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageClassName: standard
+ leaf:
+ podPlacementPolicy:
+ name: default
+ podTemplate:
+ controller: {}
+ metadata: {}
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ cpu: 500m
+ memory: 2Gi
+ requests:
+ cpu: 500m
+ memory: 2Gi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ - name: singlestore-coordinator
+ resources:
+ limits:
+ memory: 256Mi
+ requests:
+ cpu: 200m
+ memory: 256Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ initContainers:
+ - name: singlestore-init
+ resources:
+ limits:
+ memory: 512Mi
+ requests:
+ cpu: 200m
+ memory: 512Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 998
+ runAsNonRoot: true
+ runAsUser: 999
+ seccompProfile:
+ type: RuntimeDefault
+ securityContext:
+ fsGroup: 999
+ replicas: 2
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ storageClassName: standard
+ version: 8.5.7
+status:
+ conditions:
+ - lastTransitionTime: "2024-05-06T06:53:06Z"
+ message: 'The KubeDB operator has started the provisioning of Singlestore: demo/sdb-quickstart'
+ observedGeneration: 2
+ reason: DatabaseProvisioningStartedSuccessfully
+ status: "True"
+ type: ProvisioningStarted
+ - lastTransitionTime: "2024-05-06T06:56:05Z"
+ message: All Aggregator replicas are ready for Singlestore demo/sdb-quickstart
+ observedGeneration: 2
+ reason: AllReplicasReady
+ status: "True"
+ type: ReplicaReady
+ - lastTransitionTime: "2024-05-06T06:54:17Z"
+ message: database demo/sdb-quickstart is accepting connection
+ observedGeneration: 2
+ reason: AcceptingConnection
+ status: "True"
+ type: AcceptingConnection
+ - lastTransitionTime: "2024-05-06T06:54:17Z"
+ message: database demo/sdb-quickstart is ready
+ observedGeneration: 2
+ reason: AllReplicasReady
+ status: "True"
+ type: Ready
+ - lastTransitionTime: "2024-05-06T06:54:18Z"
+ message: 'The Singlestore: demo/sdb-quickstart is successfully provisioned.'
+ observedGeneration: 2
+ reason: DatabaseSuccessfullyProvisioned
+ status: "True"
+ type: Provisioned
+ phase: Ready
+
+
+```
+
+## Connect with SingleStore database
+
+KubeDB operator has created a new Secret called `sdb-quickstart-root-cred` *(format: {singlestore-object-name}-root-cred)* for storing the password for `singlestore` superuser. This secret contains a `username` key which contains the *username* for SingleStore superuser and a `password` key which contains the *password* for SingleStore superuser.
+
+If you want to use an existing secret, please specify it when creating the SingleStore object using `spec.authSecret.name`. While creating this secret manually, make sure the secret contains the two keys `username` and `password`, and make sure to use `root` as the value of `username`. For more details, see [here](/docs/v2024.4.27/guides/mysql/concepts/database/#specdatabasesecret).
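+
+For example, a manually created auth secret might look like this (the secret name and password are placeholders):
+
+```bash
+$ kubectl create secret generic -n demo my-sdb-auth \
+    --from-literal=username=root \
+    --from-literal=password='my-secure-password'
+secret/my-sdb-auth created
+```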
+
+Now, we need the `username` and `password` to connect to this database from the `kubectl exec` command. In this example, the `sdb-quickstart-root-cred` secret holds the username and password.
+
+```bash
+$ kubectl get pod -n demo sdb-quickstart-aggregator-0 -oyaml | grep podIP
+ podIP: 10.244.0.14
+$ kubectl get secrets -n demo sdb-quickstart-root-cred -o jsonpath='{.data.\username}' | base64 -d
+ root
+$ kubectl get secrets -n demo sdb-quickstart-root-cred -o jsonpath='{.data.\password}' | base64 -d
+ J0h_BUdJB8mDO31u
+```
+Now, we will exec into the pod `sdb-quickstart-aggregator-0` and connect to the database using the username and password.
+
+```bash
+$ kubectl exec -it -n demo sdb-quickstart-aggregator-0 -- bash
+ Defaulting container name to singlestore.
+ Use 'kubectl describe pod/sdb-quickstart-aggregator-0 -n demo' to see all of the containers in this pod.
+
+  [memsql@sdb-quickstart-aggregator-0 /]$ memsql -uroot -p"J0h_BUdJB8mDO31u"
+ singlestore-client: [Warning] Using a password on the command line interface can be insecure.
+ Welcome to the MySQL monitor. Commands end with ; or \g.
+ Your MySQL connection id is 1114
+ Server version: 5.7.32 SingleStoreDB source distribution (compatible; MySQL Enterprise & MySQL Commercial)
+
+ Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
+
+ Oracle is a registered trademark of Oracle Corporation and/or its
+ affiliates. Other names may be trademarks of their respective
+ owners.
+
+ Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+ singlestore> show databases;
+ +--------------------+
+ | Database |
+ +--------------------+
+ | cluster |
+ | information_schema |
+ | memsql |
+ | singlestore_health |
+ +--------------------+
+ 4 rows in set (0.00 sec)
+
+```
+You can also connect with database management tools like [SingleStore Studio](https://docs.singlestore.com/db/v8.5/reference/singlestore-tools-reference/singlestore-studio/).
+
+You can access SingleStore Studio by forwarding the primary service port to any of your localhost ports. Alternatively, accessing it through the ExternalIP's 8081 port is also an option.
+
+```bash
+$ kubectl port-forward -n demo service/sdb-quickstart 8081
+Forwarding from 127.0.0.1:8081 -> 8081
+Forwarding from [::1]:8081 -> 8081
+```
+Let's open your browser and go to http://localhost:8081 (or https://localhost:8081 with TLS), then click on the `Add or Create Cluster` option.
+Then choose `Add Existing Cluster`, click `Next`, and you will see an interface like the one below:
+
+![SingleStore Studio: Add Existing Cluster](/docs/v2024.4.27/guides/singlestore/quickstart/images/studio-1.png)
+
+After providing all the information, you will see a UI like the image below.
+
+![SingleStore Studio dashboard](/docs/v2024.4.27/guides/singlestore/quickstart/images/studio-2.png)
+
+## Database TerminationPolicy
+
+This field is used to regulate the deletion process of the related resources when the `Singlestore` object is deleted. Users can set the value of this field according to their needs. The available options and their use case scenarios are described below:
+
+**DoNotTerminate:**
+
+When `terminationPolicy` is set to `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, it prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`. You can see this below:
+
+```bash
+$ kubectl delete sdb sdb-quickstart -n demo
+The Singlestore "sdb-quickstart" is invalid: spec.terminationPolicy: Invalid value: "sdb-quickstart": Can not delete as terminationPolicy is set to "DoNotTerminate"
+```
+
+Now, run `kubectl patch -n demo sdb sdb-quickstart -p '{"spec":{"terminationPolicy":"Halt"}}' --type="merge"` to set `spec.terminationPolicy` to `Halt` (which deletes the singlestore object and keeps PVCs, snapshots and Secrets intact) or remove this field (which defaults to `Delete`). Then you will be able to delete/halt the database.
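+
+A sketch of that patch command with its typical output (the exact output may vary by kubectl version):
+
+```bash
+$ kubectl patch -n demo sdb sdb-quickstart -p '{"spec":{"terminationPolicy":"Halt"}}' --type="merge"
+singlestore.kubedb.com/sdb-quickstart patched
+```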
+
+Learn details of all `TerminationPolicy` [here](/docs/v2024.4.27/guides/mysql/concepts/database/#specterminationpolicy).
+
+**Halt:**
+
+Suppose you want to reuse your database volume and credential to deploy your database in the future using the same configuration. But right now, you just want to delete the database except for the database volumes and credentials. In this scenario, you must set the `Singlestore` object's `terminationPolicy` to `Halt`.
+
+When the [TerminationPolicy](/docs/v2024.4.27/guides/mysql/concepts/database/#specterminationpolicy) is set to `Halt` and the Singlestore object is deleted, the KubeDB operator will delete the StatefulSet and its pods but leaves the `PVCs`, `secrets` and database backup data(`snapshots`) intact. You can set the `terminationPolicy` to `Halt` in an existing database using the `patch` command for testing.
+
+At first, run `kubectl patch -n demo sdb sdb-quickstart -p '{"spec":{"terminationPolicy":"Halt"}}' --type="merge"`. Then delete the singlestore object,
+
+```bash
+$ kubectl delete sdb sdb-quickstart -n demo
+singlestore.kubedb.com "sdb-quickstart" deleted
+```
+
+Now, run the following command to get all singlestore resources in the `demo` namespace,
+
+```bash
+$ kubectl get petset,svc,secret,pvc -n demo
+NAME TYPE DATA AGE
+secret/sdb-quickstart-root-cred kubernetes.io/basic-auth 2 3m35s
+
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
+persistentvolumeclaim/data-sdb-quickstart-leaf-0         Bound    pvc-389f40a8-09bc-4724-aa52-94705d56ff77   10Gi       RWO            standard                      3m18s
+persistentvolumeclaim/data-sdb-quickstart-leaf-1         Bound    pvc-8dfbf04e-41a8-4cdd-ba14-7ad42d8701bb   10Gi       RWO            standard                      3m11s
+persistentvolumeclaim/data-sdb-quickstart-aggregator-0   Bound    pvc-c4f7d255-7307-4455-b195-70c71b81706f   1Gi        RWO            standard                      3m29s
+
+```
+
+From the above output, you can see that all singlestore resources (`StatefulSet`, `Service`, etc.) are deleted except the `PVCs` and the `Secret`. You can recreate your singlestore again using these resources.
+
+>You can also set the `terminationPolicy` to `Pause` (deprecated). Its behavior is the same as `Halt`; `Pause` has simply been replaced by `Halt`.
+
+**Delete:**
+
+If you want to delete the existing database along with the volumes used, but want to restore the database from previously taken `snapshots` and `secrets` then you might want to set the `Singlestore` object `terminationPolicy` to `Delete`. In this setting, `StatefulSet` and the volumes will be deleted. If you decide to restore the database, you can do so using the snapshots and the credentials.
+
+When the [TerminationPolicy](/docs/v2024.4.27/guides/mysql/concepts/database/#specterminationpolicy) is set to `Delete` and the Singlestore object is deleted, the KubeDB operator will delete the StatefulSet and its pods along with PVCs but leaves the `secret` and database backup data(`snapshots`) intact.
+
+Suppose, we have a database with `terminationPolicy` set to `Delete`. Now, we are going to delete the database using the following command:
+
+```bash
+$ kubectl delete sdb sdb-quickstart -n demo
+singlestore.kubedb.com "sdb-quickstart" deleted
+```
+
+Now, run the following command to get all singlestore resources in the `demo` namespace,
+
+```bash
+$ kubectl get petset,svc,secret,pvc -n demo
+NAME TYPE DATA AGE
+secret/sdb-quickstart-root-cred kubernetes.io/basic-auth 2 17m
+
+```
+
+From the above output, you can see that all singlestore resources (`StatefulSet`, `Service`, `PVCs`, etc.) are deleted except the `Secret`.
+
+>If you don't set the `terminationPolicy`, then KubeDB sets the `terminationPolicy` to `Delete` by default.
+
+**WipeOut:**
+
+You can totally delete the `Singlestore` database and relevant resources without any tracking by setting `terminationPolicy` to `WipeOut`. KubeDB operator will delete all relevant resources of this `Singlestore` database (i.e, `PVCs`, `Secrets`, `Snapshots`) when the `terminationPolicy` is set to `WipeOut`.
+
+Suppose, we have a database with `terminationPolicy` set to `WipeOut`. Now, we are going to delete the database using the following command:
+
+```bash
+$ kubectl delete sdb sdb-quickstart -n demo
+singlestore.kubedb.com "sdb-quickstart" deleted
+```
+
+Now, run the following command to get all singlestore resources in the `demo` namespace,
+
+```bash
+$ kubectl get petset,svc,secret,pvc -n demo
+No resources found in demo namespace.
+```
+
+From the above output, you can see that all singlestore resources are deleted. There is no option to recreate/reinitialize your database if `terminationPolicy` is set to `WipeOut`.
+
+>Be careful when you set the `terminationPolicy` to `WipeOut`, because there is no way to trace the database resources once the database is deleted.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl patch -n demo singlestore/sdb-quickstart -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+kubectl delete -n demo singlestore/sdb-quickstart
+kubectl delete ns demo
+```
+
+## Tips for Testing
+
+If you are just testing some basic functionalities, you might want to avoid additional hassles due to some safety features that are great for a production environment. You can follow these tips to avoid them.
+
+1. **Use `storageType: Ephemeral`**. Databases are precious. You might not want to lose your data in your production environment if a database pod fails. So, we recommend using `spec.storageType: Durable` and providing a storage spec in the `spec.storage` section. For testing purposes, you can just use `spec.storageType: Ephemeral`. KubeDB will use [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) for storage. You will not need to provide the `spec.storage` section.
+2. **Use `terminationPolicy: WipeOut`**. It is nice to be able to delete everything created by KubeDB for a particular Singlestore crd when you delete the crd. For more details about termination policy, please visit [here](/docs/v2024.4.27/guides/mysql/concepts/database/#specterminationpolicy). A combined sketch of both tips follows below.
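+
+A minimal testing-only sketch combining both tips (the object name is illustrative; standalone mode is used since `spec.topology` is omitted):
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+  name: sdb-test
+  namespace: demo
+spec:
+  version: "8.5.7"
+  licenseSecret:
+    name: license-secret
+  storageType: Ephemeral      # uses emptyDir; no spec.storage needed
+  terminationPolicy: WipeOut  # deletes everything on removal; testing only
+```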
+
+## Next Steps
\ No newline at end of file
diff --git a/content/docs/v2024.4.27/guides/singlestore/quickstart/yamls/quickstart.yaml b/content/docs/v2024.4.27/guides/singlestore/quickstart/yamls/quickstart.yaml
new file mode 100644
index 0000000000..c6c91daec4
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/singlestore/quickstart/yamls/quickstart.yaml
@@ -0,0 +1,59 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Singlestore
+metadata:
+ name: sdb-quickstart
+ namespace: demo
+spec:
+ version: "8.5.7"
+ topology:
+ aggregator:
+ replicas: 1
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "0.5"
+ requests:
+ memory: "2Gi"
+ cpu: "0.5"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ leaf:
+ replicas: 2
+ podTemplate:
+ spec:
+ containers:
+ - name: singlestore
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "0.5"
+ requests:
+ memory: "2Gi"
+ cpu: "0.5"
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+ licenseSecret:
+ name: license-secret
+ storageType: Durable
+ terminationPolicy: WipeOut
+ serviceTemplates:
+ - alias: primary
+ spec:
+ type: LoadBalancer
+ ports:
+ - name: http
+ port: 9999
\ No newline at end of file
diff --git a/content/docs/v2024.4.27/guides/solr/README.md b/content/docs/v2024.4.27/guides/solr/README.md
new file mode 100644
index 0000000000..19da5fbc87
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/solr/README.md
@@ -0,0 +1,64 @@
+---
+title: Solr
+menu:
+ docs_v2024.4.27:
+ identifier: sl-readme-solr
+ name: Solr
+ parent: sl-solr-guides
+ weight: 10
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+url: /docs/v2024.4.27/guides/solr/
+aliases:
+- /docs/v2024.4.27/guides/solr/README/
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+### Overview
+
+Solr is an open-source, Java-based, information retrieval library with support for limited relational, graph, statistical, data analysis or storage related use cases. Solr is designed to drive powerful document retrieval or analytical applications involving unstructured data, semi-structured data or a mix of unstructured and structured data. Solr is highly reliable, scalable and fault tolerant, providing distributed indexing, replication and load-balanced querying, automated failover and recovery, centralized configuration and more. Solr powers the search and navigation features of many of the world's largest internet sites.
+
+## Supported Solr Features
+| Features | Availability |
+|--------------------------------------|:------------:|
+| Clustering | ✓ |
+| Customized Docker Image | ✓ |
+| Authentication & Authorization        | ✓ |
+| Reconfigurable Health Checker | ✓ |
+| Custom Configuration | ✓ |
+| Grafana Dashboards | ✓ |
+| Externally manageable Auth Secret | ✓ |
+| Persistent Volume | ✓ |
+| Monitoring with Prometheus & Grafana | ✓ |
+| Builtin Prometheus Discovery | ✓ |
+| Alert Dashboard | ✓ |
+| Using Prometheus operator | ✓ |
+| Dashboard ( Solr UI ) | ✓ |
+
+## Life Cycle of a Solr Object
+
+![Lifecycle of a Solr object](/docs/v2024.4.27/guides/solr/quickstart/overview/images/Lifecycle-of-a-solr-instance.png)
+
+## User Guide
+
+- [Quickstart Solr](/docs/v2024.4.27/guides/solr/quickstart/overview/) with KubeDB Operator.
+- Detail Concept of [Solr Object](/docs/v2024.4.27/guides/solr/concepts/solr).
+
+
+## Next Steps
+
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
\ No newline at end of file
diff --git a/content/docs/v2024.4.27/guides/solr/_index.md b/content/docs/v2024.4.27/guides/solr/_index.md
new file mode 100644
index 0000000000..f24e218df8
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/solr/_index.md
@@ -0,0 +1,22 @@
+---
+title: Solr
+menu:
+ docs_v2024.4.27:
+ identifier: sl-solr-guides
+ name: Solr
+ parent: guides
+ weight: 10
+menu_name: docs_v2024.4.27
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
diff --git a/content/docs/v2024.4.27/guides/solr/concepts/_index.md b/content/docs/v2024.4.27/guides/solr/concepts/_index.md
new file mode 100644
index 0000000000..4a842b1fb4
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/solr/concepts/_index.md
@@ -0,0 +1,22 @@
+---
+title: Solr Concepts
+menu:
+ docs_v2024.4.27:
+ identifier: sl-concepts-solr
+ name: Concepts
+ parent: sl-solr-guides
+ weight: 14
+menu_name: docs_v2024.4.27
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
diff --git a/content/docs/v2024.4.27/guides/solr/concepts/appbinding.md b/content/docs/v2024.4.27/guides/solr/concepts/appbinding.md
new file mode 100644
index 0000000000..aac266c766
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/solr/concepts/appbinding.md
@@ -0,0 +1,151 @@
+---
+title: AppBinding CRD
+menu:
+ docs_v2024.4.27:
+ identifier: sl-appbinding-solr
+ name: AppBinding
+ parent: sl-concepts-solr
+ weight: 30
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# AppBinding
+
+## What is AppBinding
+
+An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding).
+
+If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database.
+
+KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`.
+
+## AppBinding CRD Specification
+
+Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section.
+
+An `AppBinding` object created by `KubeDB` for Solr database is shown below,
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"kubedb.com/v1alpha2","kind":"Solr","metadata":{"annotations":{},"name":"solr-dev","namespace":"dev"},"spec":{"monitor":{"agent":"prometheus.io/builtin"},"replicas":3,"solrModules":["s3-repository","gcs-repository","prometheus-exporter"],"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"linode-block-storage"},"terminationPolicy":"Delete","version":"9.4.1","zookeeperRef":{"name":"zoo-dev","namespace":"dev"}}}
+ creationTimestamp: "2024-05-06T11:25:38Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/component: database
+ app.kubernetes.io/instance: solr-dev
+ app.kubernetes.io/managed-by: kubedb.com
+ app.kubernetes.io/name: solrs.kubedb.com
+ name: solr-dev
+ namespace: dev
+ ownerReferences:
+ - apiVersion: kubedb.com/v1alpha2
+ blockOwnerDeletion: true
+ controller: true
+ kind: Solr
+ name: solr-dev
+ uid: a7cff8ba-8ab8-4d6c-8808-322f14a9f63d
+ resourceVersion: "26297"
+ uid: ad59a514-6026-47a5-b433-9291dc2da001
+spec:
+ appRef:
+ apiGroup: kubedb.com
+ kind: Solr
+ name: solr-dev
+ namespace: dev
+ clientConfig:
+ service:
+ name: solr-dev
+ port: 8983
+ scheme: http
+ secret:
+ name: solr-dev-admin-cred
+ type: kubedb.com/solr
+ version: 9.4.1
+```
+
+Here, we are going to describe the sections of an `AppBinding` crd.
+
+### AppBinding `Spec`
+
+An `AppBinding` object has the following fields in the `spec` section:
+
+#### spec.type
+
+`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.
+
+This field follows the following format: `<app group>/<resource kind>`. The above AppBinding is pointing to a `solr` resource under `kubedb.com` group.
+
+Here, the variables are parsed as follows:
+
+| Variable | Usage |
+| --------------------- |-------------------------------------------------------------------------------------------------------------------------------|
+| `TARGET_APP_GROUP` | Represents the application group where the respective app belongs (i.e: `kubedb.com`). |
+| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `solr`). |
+| `TARGET_APP_TYPE` | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/solr`). |
+
+#### spec.secret
+
+`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.
+
+This secret must contain the following keys:
+
+| Key | Usage |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database. |
+| `password` | Password for the user specified by `username`. |
+
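+A sketch of creating such a secret manually (the secret name and password are placeholders):
+
+```bash
+$ kubectl create secret generic -n demo my-solr-cred \
+    --from-literal=username=admin \
+    --from-literal=password='strong-password'
+secret/my-solr-cred created
+```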
+
+
+#### spec.appRef
+appRef refers to the underlying application. It has 4 fields named `apiGroup`, `kind`, `name` & `namespace`.
+
+#### spec.clientConfig
+
+`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them.
+
+You can configure following fields in `spec.clientConfig` section:
+
+- **spec.clientConfig.url**
+
+ `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use `spec.clientConfig.service` section instead.
+
+> Note that, attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.
+
+- **spec.clientConfig.service**
+
+ If you are running the database inside the Kubernetes cluster, you can use Kubernetes service to connect with the database. You have to specify the following fields in `spec.clientConfig.service` section if you manually create an `AppBinding` object.
+
+ - **name :** `name` indicates the name of the service that connects with the target database.
+ - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
+ - **port :** `port` specifies the port where the target database is running.
+
+- **spec.clientConfig.insecureSkipTLSVerify**
+
+  `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through the `spec.clientConfig.caBundle` field instead.
+
+- **spec.clientConfig.caBundle**
+
+ `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.
+
+## Next Steps
+
+- Learn how to use KubeDB to manage various databases [here](/docs/v2024.4.27/guides/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
diff --git a/content/docs/v2024.4.27/guides/solr/concepts/solr.md b/content/docs/v2024.4.27/guides/solr/concepts/solr.md
new file mode 100644
index 0000000000..2f85687bcf
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/solr/concepts/solr.md
@@ -0,0 +1,316 @@
+---
+title: Solr CRD
+menu:
+ docs_v2024.4.27:
+ identifier: sl-solr-concepts
+ name: Solr
+ parent: sl-concepts-solr
+ weight: 10
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# Solr
+
+## What is Solr
+
+`Solr` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [Solr](https://solr.apache.org/) in a Kubernetes native way. You only need to describe the desired database configuration in a Solr object, and the KubeDB operator will create Kubernetes objects in the desired state for you.
+
+## Solr Spec
+
+As with all other Kubernetes objects, a Solr needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example Solr object.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Solr
+metadata:
+ name: solr-cluster
+ namespace: demo
+spec:
+ authConfigSecret:
+ name: solr-cluster-auth-config
+ authSecret:
+ name: solr-cluster-admin-cred
+ healthChecker:
+ failureThreshold: 3
+ periodSeconds: 20
+ timeoutSeconds: 10
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ serviceMonitor:
+ interval: 10s
+ labels:
+ release: prometheus
+ serviceTemplates:
+ - alias: primary
+ metadata:
+ annotations:
+ passMe: ToService
+ spec:
+ type: NodePort
+ ports:
+ - name: http
+ port: 8983
+ storageType: Durable
+ terminationPolicy: Delete
+ topology:
+ coordinator:
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageClassName: standard
+ suffix: coordinator
+ data:
+ replicas: 2
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageClassName: standard
+ suffix: data
+ overseer:
+ replicas: 1
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageClassName: standard
+ suffix: overseer
+ version: 9.4.1
+ zookeeperDigestReadonlySecret:
+ name: solr-cluster-zk-digest-readonly
+ zookeeperDigestSecret:
+ name: solr-cluster-zk-digest
+ zookeeperRef:
+ name: zk-com
+ namespace: demo
+```
+
+
+### spec.version
+
+`spec.version` is a required field specifying the name of the [SolrVersion](/docs/v2024.4.27/guides/solr/concepts/solrversion) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `SolrVersion` crds,
+
+- `8.11.2`
+- `9.4.1`
+
+### spec.disableSecurity
+
+`spec.disableSecurity` is an optional field that decides whether the Solr instance will be secured by authentication or not.
+
+### spec.authSecret
+
+`spec.authSecret` is an optional field that points to a Secret used to hold credentials for `Solr` superuser. If not set, KubeDB operator creates a new Secret `{Solr-object-name}-admin-cred` for storing the password for `Solr` superuser.
+
+We can use this field in 3 modes:
+
+1. Using an external secret. In this case, you need to create an auth secret first with the required fields, then specify the secret name when creating the Solr object using `spec.authSecret.name` and set `spec.authSecret.externallyManaged` to true.
+```yaml
+authSecret:
+ name:
+ externallyManaged: true
+```
+2. Specifying the secret name only. In this case, you need to specify the secret name when creating the Solr object using `spec.authSecret.name`. `externallyManaged` is false by default.
+```yaml
+authSecret:
+ name:
+```
+
+3. Let KubeDB do everything for you. In this case, you don't need to do anything; KubeDB will create and manage the auth secret itself.
+
+AuthSecret contains a `username` key and a `password` key, which contain the username and password, respectively, for the `Solr` superuser.
+
+Example:
+
+```bash
+$ kubectl create secret generic solr-cluster-admin-cred -n demo \
+--from-literal=username=admin \
+--from-literal=password=6q8u_2jMOW-OOZXk
+secret "solr-cluster-admin-cred" created
+```
+
+```yaml
+apiVersion: v1
+data:
+ password: NnE4dV8yak1PVy1PT1pYaw==
+  username: YWRtaW4=
+kind: Secret
+metadata:
+ name: solr-cluster-admin-cred
+ namespace: demo
+type: Opaque
+```
+
+Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
+
+
+### spec.zookeeperRef
+
+Reference to the ZooKeeper cluster which coordinates Solr and stores the necessary credentials of the Solr cluster.
+
+### spec.zookeeperDigestSecret
+
+We have some ZooKeeper digest secrets which keep the data in our ZooKeeper cluster safe. These secrets do not guarantee the security of the ZooKeeper cluster; they just encode the Solr data stored in the ZooKeeper cluster.
+
+### spec.storage
+
+If you set `spec.storageType` to `Durable`, then `spec.storage` is a required field that specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the PetSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+
+- `spec.storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
+- `spec.storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes.
+- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs.
+
+To learn how to configure `spec.storage`, please visit the links below:
+
+- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+
+### spec.solrModules
+
+We have to enable certain modules to perform operations like backup and monitoring. For example, we have to enable the `prometheus-exporter` module to enable monitoring, as shown below.
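+
+For example, the module list used by the Solr example elsewhere in these docs:
+
+```yaml
+solrModules:
+  - s3-repository
+  - gcs-repository
+  - prometheus-exporter
+```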
+
+### spec.monitor
+
+Solr managed by KubeDB can be monitored with builtin Prometheus and Prometheus operator out-of-the-box.
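+
+For example, the `spec.monitor` section from the sample object above:
+
+```yaml
+monitor:
+  agent: prometheus.io/operator
+  prometheus:
+    serviceMonitor:
+      interval: 10s
+      labels:
+        release: prometheus
+```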
+
+
+### spec.configSecret
+
+`spec.configSecret` is an optional field that allows users to provide custom configuration for Solr. This field accepts a [`VolumeSource`](https://github.com/kubernetes/api/blob/release-1.11/core/v1/types.go#L47). So you can use any Kubernetes supported volume source such as `configMap`, `secret`, `azureDisk` etc.
+
+### spec.podTemplate
+
+KubeDB allows providing a template for database pod through `spec.podTemplate`. KubeDB operator will pass the information provided in `spec.podTemplate` to the Petset created for Solr server.
+
+KubeDB accepts the following fields to set in `spec.podTemplate`:
+
+- metadata:
+ - annotations (pod's annotation)
+- controller:
+ - annotations (petset's annotation)
+- spec:
+ - resources
+ - initContainers
+ - containers
+ - imagePullSecrets
+ - nodeSelector
+ - affinity
+ - serviceAccountName
+ - schedulerName
+ - tolerations
+ - priorityClassName
+ - priority
+ - securityContext
+ - livenessProbe
+ - readinessProbe
+ - lifecycle
+
+You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/39bf8b2/api/v2/types.go#L44-L279).
+Uses of some fields of `spec.podTemplate` are described below.
+
+#### spec.podTemplate.spec.imagePullSecret
+
+`KubeDB` provides the flexibility of deploying the Solr server from a private Docker registry.
+
+#### spec.podTemplate.spec.nodeSelector
+
+`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) .
+
+#### spec.podTemplate.spec.serviceAccountName
+
+ `serviceAccountName` is an optional field supported by KubeDB Operator (version 0.13.0 and higher) that can be used to specify a custom service account to fine tune role based access control.
+
+ If this field is left empty, the KubeDB operator will create a service account name matching Solr crd name. Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account.
+
+ If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account.
+
+ If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing necessary access permissions manually.
+
+#### spec.podTemplate.spec.resources
+
+`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
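+
+A minimal sketch (the container name `solr` is an assumption for illustration):
+
+```yaml
+podTemplate:
+  spec:
+    containers:
+      - name: solr   # hypothetical container name
+        resources:
+          requests:
+            cpu: 500m
+            memory: 2Gi
+          limits:
+            memory: 2Gi
+```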
+
+### spec.serviceTemplates
+
+You can also provide a template for the services created by KubeDB operator for Solr server through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services.
+
+KubeDB allows following fields to set in `spec.serviceTemplates`:
+- `alias` represents the identifier of the service. It has the following possible values:
+ - `primary` is used for the primary service identification.
+ - `standby` is used for the secondary service identification.
+ - `stats` is used for the exporter service identification.
+
+- metadata:
+ - annotations
+- spec:
+ - type
+ - ports
+ - clusterIP
+ - externalIPs
+ - loadBalancerIP
+ - loadBalancerSourceRanges
+ - externalTrafficPolicy
+ - healthCheckNodePort
+ - sessionAffinityConfig
+
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail.
+
+### spec.terminationPolicy
+
+`terminationPolicy` gives flexibility whether to `nullify`(reject) the delete operation of the `Solr` crd or which resources KubeDB should keep or delete when you delete the `Solr` crd. KubeDB provides the following four termination policies:
+
+- DoNotTerminate
+- Halt
+- Delete (`Default`)
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement `DoNotTerminate` feature. If admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+The following table shows what KubeDB does when you delete the Solr crd for different termination policies:
+
+| Behavior | DoNotTerminate | Halt | Delete | WipeOut |
+|------------------------------------| :------------: |:--------:| :------: | :------: |
+| 1. Block Delete operation | ✓ | ✗ | ✗ | ✗ |
+| 2. Delete Petset | ✗ | ✓ | ✓ | ✓ |
+| 3. Delete Services | ✗ | ✓ | ✓ | ✓ |
+| 4. Delete PVCs | ✗ | ✗ | ✓ | ✓ |
+| 5. Delete Secrets | ✗ | ✗ | ✗ | ✓ |
+| 6. Delete Snapshots | ✗ | ✗ | ✗ | ✓ |
+| 7. Delete Snapshot data from bucket | ✗ | ✗ | ✗ | ✓ |
+
+If you don't specify `spec.terminationPolicy`, KubeDB uses the `Delete` termination policy by default.
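+
+A sketch of switching the policy on an existing object (assuming `solr` is the resource name exposed by the CRD):
+
+```bash
+$ kubectl patch -n demo solr solr-cluster -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+```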
+
+### spec.halted
+
+Indicates that the database is halted and all offshoot Kubernetes resources except PVCs are deleted.
+
+### spec.healthChecker
+
+It defines the attributes for the health checker.
+- `spec.healthChecker.periodSeconds` specifies how often to perform the health check.
+- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out.
+- `spec.healthChecker.failureThreshold` specifies minimum consecutive failures for the healthChecker to be considered failed.
+
+Know details about KubeDB Health checking from this [blog post](https://appscode.com/blog/post/kubedb-health-checker/).
+
+## Next Steps
+
+- Learn how to use KubeDB to run a Solr server [here](/docs/v2024.4.27/guides/solr/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
diff --git a/content/docs/v2024.4.27/guides/solr/concepts/solrversion.md b/content/docs/v2024.4.27/guides/solr/concepts/solrversion.md
new file mode 100644
index 0000000000..9429b2b091
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/solr/concepts/solrversion.md
@@ -0,0 +1,107 @@
+---
+title: SolrVersion CRD
+menu:
+ docs_v2024.4.27:
+ identifier: sl-catalog-concepts
+ name: SolrVersion
+ parent: sl-concepts-solr
+ weight: 20
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# SolrVersion
+
+## What is SolrVersion
+
+`SolrVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for [Solr](https://solr.apache.org/) database deployed with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `SolrVersion` custom resource will be created automatically for every supported Solr version. You have to specify the name of the `SolrVersion` crd in the `spec.version` field of the [Solr](/docs/v2024.4.27/guides/solr/concepts/solr) crd. Then, KubeDB will use the docker images specified in the `SolrVersion` crd to create your expected database.
+
+Using a separate crd for specifying respective docker images and pod security policy names allows us to modify the images and policies independent of the KubeDB operator. This will also allow the users to use a custom image for the database.
+
+## SolrVersion Specification
+
+As with all other Kubernetes objects, a SolrVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: SolrVersion
+metadata:
+ annotations:
+ meta.helm.sh/release-name: kubedb-catalog
+ meta.helm.sh/release-namespace: kubedb
+ creationTimestamp: "2024-05-06T08:55:33Z"
+ generation: 2
+ labels:
+ app.kubernetes.io/instance: kubedb-catalog
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/name: kubedb-catalog
+ app.kubernetes.io/version: v2024.4.27
+ helm.sh/chart: kubedb-catalog-v2024.4.27
+ name: 9.4.1
+ resourceVersion: "3229"
+ uid: 441f6f1e-943c-4a34-84ac-f2705da63fb1
+spec:
+ db:
+ image: ghcr.io/appscode-images/solr:9.4.1
+ initContainer:
+ image: ghcr.io/kubedb/solr-init:9.4.1
+ securityContext:
+ runAsUser: 8983
+ version: 9.4.1
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `SolrVersion` crd. You have to specify this name in the `spec.version` field of the [Solr](/docs/v2024.4.27/guides/solr/concepts/solr) crd.
+
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of Solr server that has been used to build the docker image specified in `spec.db.image` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will skip processing this CRD object and will add an event to the CRD object specifying that the DB version is deprecated.
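+
+A quick way to check this flag on an installed catalog entry (a sketch; `jsonpath` prints nothing when the field is unset, which means the version is not deprecated):
+
+```bash
+$ kubectl get solrversion 9.4.1 -o jsonpath='{.spec.deprecated}'
+```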
+
+### spec.db.image
+
+`spec.db.image` is a required field that specifies the docker image which will be used by the KubeDB operator to create the PetSet (an AppsCode managed, customized StatefulSet) for the expected Solr server.
+
+### spec.initContainer.image
+
+`spec.initContainer.image` is a required field that specifies the image for init container.
+
+### spec.securityContext
+
+Database-specific security context which will be added to the PetSet.
+
+The following helm command shows how such settings (a custom pod security policy, plus the Solr and ZooKeeper feature gates) can be applied when installing or upgrading KubeDB:
+
+```bash
+helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
+  --namespace kubedb --create-namespace \
+  --set additionalPodSecurityPolicies[0]=custom-db-policy \
+  --set-file global.license=/path/to/the/license.txt \
+  --set global.featureGates.Solr=true --set global.featureGates.ZooKeeper=true \
+  --wait --burst-limit=10000 --debug
+```
+
+## Next Steps
+
+- Learn about Solr crd [here](/docs/v2024.4.27/guides/solr/concepts/solr).
+- Deploy your first Solr server with KubeDB by following the guide [here](/docs/v2024.4.27/guides/solr/quickstart/overview/).
diff --git a/content/docs/v2024.4.27/guides/solr/quickstart/_index.md b/content/docs/v2024.4.27/guides/solr/quickstart/_index.md
new file mode 100644
index 0000000000..b57e5cb07e
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/solr/quickstart/_index.md
@@ -0,0 +1,22 @@
+---
+title: Solr
+menu:
+ docs_v2024.4.27:
+ identifier: sl-quickstart-solr
+ name: Quickstart
+ parent: sl-solr-guides
+ weight: 12
+menu_name: docs_v2024.4.27
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
diff --git a/content/docs/v2024.4.27/guides/solr/quickstart/overview/images/Lifecycle-of-a-solr-instance.png b/content/docs/v2024.4.27/guides/solr/quickstart/overview/images/Lifecycle-of-a-solr-instance.png
new file mode 100644
index 0000000000..a02e3f3502
Binary files /dev/null and b/content/docs/v2024.4.27/guides/solr/quickstart/overview/images/Lifecycle-of-a-solr-instance.png differ
diff --git a/content/docs/v2024.4.27/guides/solr/quickstart/overview/index.md b/content/docs/v2024.4.27/guides/solr/quickstart/overview/index.md
new file mode 100644
index 0000000000..3553901458
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/solr/quickstart/overview/index.md
@@ -0,0 +1,520 @@
+---
+title: Solr Quickstart
+menu:
+ docs_v2024.4.27:
+ identifier: sl-overview-solr
+ name: Overview
+ parent: sl-quickstart-solr
+ weight: 10
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# Solr QuickStart
+
+This tutorial will show you how to use KubeDB to run a Solr database.
+
+
+
+
+
+## Before You Begin
+
+At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+Now, install the KubeDB operator in your cluster following the steps [here](/docs/v2024.4.27/setup/install/_index). Make sure to install with the helm command including `--set global.featureGates.Solr=true --set global.featureGates.ZooKeeper=true` to enable the Solr and ZooKeeper CRDs.
+
+To keep things isolated, this tutorial uses a separate namespace called `demo` throughout.
+
+```bash
+$ kubectl create namespace demo
+namespace/demo created
+
+$ kubectl get namespace
+NAME STATUS AGE
+demo Active 9s
+```
+
+> Note: YAML files used in this tutorial are stored in [docs/guides/solr/quickstart/overview/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/solr/quickstart/overview/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+> We have designed this tutorial to demonstrate a production setup of KubeDB managed Solr. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/v2024.4.27/guides/solr/quickstart/overview/#tips-for-testing).
+
+## Find Available StorageClass
+
+We will have to provide `StorageClass` in Solr CRD specification. Check available `StorageClass` in your cluster using the following command,
+
+```bash
+$ kubectl get storageclass
+NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 14h
+```
+
+Here, we have `standard` StorageClass in our cluster from [Local Path Provisioner](https://github.com/rancher/local-path-provisioner).
+
+## Find Available SolrVersion
+
+When you install the KubeDB operator, it registers a CRD named `SolrVersion`. The installation process comes with a set of tested SolrVersion objects. Let's check the available SolrVersions:
+
+```bash
+$ kubectl get solrversion
+NAME VERSION DB_IMAGE DEPRECATED AGE
+8.11.2 8.11.2 ghcr.io/appscode-images/solr:8.11.2 9d
+9.4.1 9.4.1 ghcr.io/appscode-images/solr:9.4.1 9d
+```
+
+Notice the `DEPRECATED` column. Here, `true` means that this SolrVersion is deprecated for the current KubeDB version. KubeDB will not work with a deprecated SolrVersion.
+
+In this tutorial, we will use `9.4.1` SolrVersion CR to create a Solr cluster.
+
+> Note: An image with a higher modification tag will have more features and fixes than an image with a lower modification tag. Hence, it is recommended to use SolrVersion CRD with the highest modification tag to take advantage of the latest features. For example, use `9.4.1` over `8.11.2`.
+
+## Create a Solr Cluster
+
+The KubeDB operator implements a Solr CRD to define the specification of a Solr database.
+
+The KubeDB Solr runs in `solrcloud` mode. Hence, it needs an external ZooKeeper ensemble to distribute replicas among pods and store configurations.
+
+We will use KubeDB ZooKeeper for this purpose.
+
+The ZooKeeper instance used for this tutorial:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ZooKeeper
+metadata:
+  name: zoo-com
+  namespace: demo
+spec:
+  version: 3.8.3
+  replicas: 3
+  terminationPolicy: Delete
+  adminServerPort: 8080
+  storage:
+    resources:
+      requests:
+        storage: "100Mi"
+    storageClassName: standard
+    accessModes:
+      - ReadWriteOnce
+```
+
+We have to apply the ZooKeeper CR first and wait until its pods are running, to make sure that a cluster has been formed.
+
+Here,
+
+- `spec.version` - is the name of the ZooKeeperVersion CR. Here, a ZooKeeper of version `3.8.3` will be created.
+- `spec.replicas` - specifies the number of ZooKeeper nodes.
+- `spec.storageType` - specifies the type of storage that will be used for ZooKeeper database. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used then KubeDB will create the ZooKeeper database using `EmptyDir` volume. In this case, you don't have to specify `spec.storage` field. This is useful for testing purposes.
+- `spec.storage` specifies the StorageClass of PVC dynamically allocated to store data for this database. This storage spec will be passed to the Petsets created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests. If you don't specify `spec.storageType: Ephemeral`, then this field is required.
+- `spec.terminationPolicy` specifies what KubeDB should do when a user tries to delete the ZooKeeper CR. The termination policy `Delete` will delete the database pods, secret and PVCs when the ZooKeeper CR is deleted. Checkout the [link](/docs/v2024.4.27/guides/zookeeper/concepts/zookeeper/#specterminationpolicy) for details.
+
+> Note: `spec.storage` section is used to create PVC for database pod. It will create PVC with storage size specified in the `storage.resources.requests` field. Don't specify `limits` here. PVC does not get resized automatically.
+
+Let's create the ZooKeeper CR that is shown above:
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/solr/quickstart/overview/yamls/zookeeper/zookeeper.yaml
+zookeeper.kubedb.com/zoo-com created
+```
+
+The ZooKeeper's `STATUS` will go from `Provisioning` to `Ready` state within a few minutes. Once the `STATUS` is `Ready`, you are ready to use the database.
+
+```bash
+$ kubectl get ZooKeeper -n demo -w
+NAME TYPE VERSION STATUS AGE
+zoo-com kubedb.com/v1alpha2 3.8.3 Ready 13m
+```
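+
+Alternatively, you can block until the object reports readiness (a sketch; this assumes the operator publishes a `Ready` condition on the ZooKeeper object, mirroring the conditions shown for Solr later in this guide):
+
+```bash
+$ kubectl wait zookeeper/zoo-com -n demo --for=condition=Ready --timeout=5m
+```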
+
+Then we can deploy Solr in our cluster.
+
+The Solr instance used for this tutorial:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Solr
+metadata:
+  name: solr-combined
+  namespace: demo
+spec:
+  version: 9.4.1
+  terminationPolicy: Delete
+  replicas: 2
+  zookeeperRef:
+    name: zoo-com
+    namespace: demo
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 2Gi
+    storageClassName: standard
+```
+
+Here,
+
+- `spec.version` - is the name of the SolrVersion CR. Here, a Solr of version `9.4.1` will be created.
+- `spec.replicas` - specifies the number of Solr nodes.
+- `spec.storageType` - specifies the type of storage that will be used for Solr database. It can be `Durable` or `Ephemeral`. The default value of this field is `Durable`. If `Ephemeral` is used then KubeDB will create the Solr database using `EmptyDir` volume. In this case, you don't have to specify `spec.storage` field. This is useful for testing purposes.
+- `spec.storage` specifies the StorageClass of PVC dynamically allocated to store data for this database. This storage spec will be passed to the Petset created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests. If you don't specify `spec.storageType: Ephemeral`, then this field is required.
+- `spec.terminationPolicy` specifies what KubeDB should do when a user tries to delete the Solr CR. The termination policy `Delete` will delete the database pods, secret and PVCs when the Solr CR is deleted. Checkout the [link](/docs/v2024.4.27/guides/solr/concepts/solr/#specterminationpolicy) for details.
+
+> Note: `spec.storage` section is used to create PVC for database pod. It will create PVC with storage size specified in the `storage.resources.requests` field. Don't specify `limits` here. PVC does not get resized automatically.
+
+Let's create the Solr CR that is shown above:
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/solr/quickstart/overview/yamls/solr/solr.yaml
+solr.kubedb.com/solr-combined created
+```
+
+The Solr's `STATUS` will go from `Provisioning` to `Ready` state within a few minutes. Once the `STATUS` is `Ready`, you are ready to use the database.
+
+```bash
+$ kubectl get Solr -n demo -w
+NAME TYPE VERSION STATUS AGE
+solr-combined kubedb.com/v1alpha2 9.4.1 Ready 17m
+```
+
+
+Describe the Solr object to observe the progress if something goes wrong or the status is not changing for a long period of time:
+
+```bash
+$ kubectl describe solr -n demo solr-combined
+Name: solr-combined
+Namespace: demo
+Labels: <none>
+Annotations: <none>
+API Version: kubedb.com/v1alpha2
+Kind: Solr
+Metadata:
+ Creation Timestamp: 2024-05-03T10:44:35Z
+ Finalizers:
+ kubedb.com
+ Generation: 1
+ Resource Version: 778471
+ UID: 8c36f5ce-0f93-4c8d-874d-662b9c404126
+Spec:
+ Auth Config Secret:
+ Name: solr-combined-auth-config
+ Auth Secret:
+ Name: solr-combined-admin-cred
+ Health Checker:
+ Failure Threshold: 3
+ Period Seconds: 20
+ Timeout Seconds: 10
+ Pod Placement Policy:
+ Name: default
+ Pod Template:
+ Controller:
+ Metadata:
+ Spec:
+ Containers:
+ Name: solr
+ Resources:
+ Limits:
+ Memory: 2Gi
+ Requests:
+ Cpu: 900m
+ Memory: 2Gi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 8983
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Init Containers:
+ Name: init-solr
+ Resources:
+ Limits:
+ Memory: 512Mi
+ Requests:
+ Cpu: 200m
+ Memory: 512Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Non Root: true
+ Run As User: 8983
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Security Context:
+ Fs Group: 8983
+ Replicas: 3
+ Storage:
+ Access Modes:
+ ReadWriteOnce
+ Resources:
+ Requests:
+ Storage: 1Gi
+ Storage Class Name: standard
+ Storage Type: Durable
+ Termination Policy: Delete
+ Version: 9.4.1
+ Zookeeper Digest Readonly Secret:
+ Name: solr-combined-zk-digest-readonly
+ Zookeeper Digest Secret:
+ Name: solr-combined-zk-digest
+ Zookeeper Ref:
+ Name: zoo-com
+ Namespace: demo
+Status:
+ Conditions:
+ Last Transition Time: 2024-05-03T10:44:35Z
+ Message: The KubeDB operator has started the provisioning of Solr: demo/solr-combined
+ Observed Generation: 1
+ Reason: DatabaseProvisioningStartedSuccessfully
+ Status: True
+ Type: ProvisioningStarted
+ Last Transition Time: 2024-05-03T10:45:01Z
+ Message: All desired replicas are ready.
+ Observed Generation: 1
+ Reason: AllReplicasReady
+ Status: True
+ Type: ReplicaReady
+ Last Transition Time: 2024-05-03T10:45:26Z
+ Message: The Solr: demo/solr-combined is accepting connection
+ Observed Generation: 1
+ Reason: DatabaseAcceptingConnectionRequest
+ Status: True
+ Type: AcceptingConnection
+ Last Transition Time: 2024-05-03T10:45:28Z
+ Message: The Solr: demo/solr-combined is accepting write request.
+ Observed Generation: 1
+ Reason: DatabaseWriteAccessCheckSucceeded
+ Status: True
+ Type: DatabaseWriteAccess
+ Last Transition Time: 2024-05-03T10:45:28Z
+ Message: The Solr: demo/solr-combined is not accepting connection.
+ Observed Generation: 1
+ Reason: AllReplicasReady,AcceptingConnection,ReadinessCheckSucceeded,DatabaseWriteAccessCheckSucceeded
+ Status: True
+ Type: Ready
+ Last Transition Time: 2024-05-03T10:45:30Z
+ Message: The Solr: demo/solr-combined is successfully provisioned.
+ Observed Generation: 1
+ Reason: DatabaseSuccessfullyProvisioned
+ Status: True
+ Type: Provisioned
+ Last Transition Time: 2024-05-03T10:45:46Z
+ Message: The Solr: demo/solr-combined is accepting read request.
+ Observed Generation: 1
+ Reason: DatabaseReadAccessCheckSucceeded
+ Status: True
+ Type: DatabaseReadAccess
+ Phase: Ready
+Events:
+```
+
+### KubeDB Operator Generated Resources
+
+On deployment of a Solr CR, the operator creates the following resources:
+
+```bash
+$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=solr-combined'
+NAME READY STATUS RESTARTS AGE
+pod/solr-combined-0 1/1 Running 0 3m40s
+pod/solr-combined-1 1/1 Running 0 3m33s
+pod/solr-combined-2 1/1 Running 0 3m26s
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+service/solr-combined ClusterIP 10.96.99.183 <none> 8983/TCP 3m44s
+service/solr-combined-pods ClusterIP None <none> 8983/TCP 3m44s
+
+NAME TYPE VERSION AGE
+appbinding.appcatalog.appscode.com/solr-combined kubedb.com/solr 9.4.1 3m44s
+
+NAME AGE
+petset.apps.k8s.appscode.com/solr-combined 5m18s
+
+
+NAME TYPE DATA AGE
+secret/solr-combined-admin-cred kubernetes.io/basic-auth 2 9d
+secret/solr-combined-auth-config Opaque 1 9d
+secret/solr-combined-config Opaque 1 3m44s
+secret/solr-combined-zk-digest kubernetes.io/basic-auth 2 9d
+secret/solr-combined-zk-digest-readonly kubernetes.io/basic-auth 2 9d
+
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
+persistentvolumeclaim/solr-combined-data-solr-combined-0 Bound pvc-6dc5573b-b59f-4ad7-beed-7438400ff500 1Gi RWO standard 3m40s
+persistentvolumeclaim/solr-combined-data-solr-combined-1 Bound pvc-1649cba5-b5e1-421b-aa73-ab6a4be0d637 1Gi RWO standard 3m33s
+persistentvolumeclaim/solr-combined-data-solr-combined-2 Bound pvc-dcb8c9e2-e64b-4a53-8b46-5c30301bb905 1Gi RWO standard 3m26s
+```
+
+- `PetSet` - a PetSet (an AppsCode managed, customized StatefulSet) named after the Solr instance. In topology mode, the operator creates 3 PetSets named `{Solr-Name}-{Suffix}`.
+- `Services` - 2 services are generated for each Solr database.
+  - `{Solr-Name}` - the client service which is used to connect to the database. It points to the `overseer` nodes.
+  - `{Solr-Name}-pods` - the node discovery service which is used by the Solr nodes to communicate with each other. It is a headless service.
+- `AppBinding` - an [AppBinding](/docs/v2024.4.27/guides/solr/concepts/appbinding) which holds the connection information for the database. It is also named after the Solr instance.
+- `Secrets` - several secrets are generated for each Solr database.
+  - `{Solr-Name}-admin-cred` - the auth secret which holds the `username` and `password` for the Solr users. The auth secret `solr-combined-admin-cred` holds the `username` and `password` for the `admin` user, which grants administrative access.
+  - `{Solr-Name}-config` - the default configuration secret created by the operator.
+  - `{Solr-Name}-auth-config` - the configuration secret of admin user information created by the operator.
+  - `{Solr-Name}-zk-digest` - the auth secret which contains the `username` and `password` for the ZooKeeper digest credentials, which can access ZooKeeper data.
+  - `{Solr-Name}-zk-digest-readonly` - the auth secret which contains the `username` and `password` for the ZooKeeper read-only digest credentials, which can read ZooKeeper data.
+
+
+## Connect with Solr Database
+
+We will use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to connect with our Solr database. Then we will use `curl` to send `HTTP` requests to check cluster health to verify that our Solr database is working well.
+
+Let's port-forward the port `8983` to local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/solr-combined 8983
+Forwarding from 127.0.0.1:8983 -> 8983
+Forwarding from [::1]:8983 -> 8983
+```
+
+Now, our Solr cluster is accessible at `localhost:8983`.
+
+**Connection information:**
+
+- Address: `localhost:8983`
+- Username:
+
+ ```bash
+ $ kubectl get secret -n demo solr-combined-admin-cred -o jsonpath='{.data.username}' | base64 -d
+ admin
+ ```
+
+- Password:
+
+ ```bash
+ $ kubectl get secret -n demo solr-combined-admin-cred -o jsonpath='{.data.password}' | base64 -d
+ Xy3ZjyU)~(9IO8_n
+ ```
+
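+For convenience, you can capture both values into shell variables (a sketch using the same secret and keys shown above):
+
+```bash
+$ SOLR_USER=$(kubectl get secret -n demo solr-combined-admin-cred -o jsonpath='{.data.username}' | base64 -d)
+$ SOLR_PASS=$(kubectl get secret -n demo solr-combined-admin-cred -o jsonpath='{.data.password}' | base64 -d)
+$ echo "$SOLR_USER:$SOLR_PASS"
+admin:Xy3ZjyU)~(9IO8_n
+```
+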
+Now let's check the health of our Solr database.
+
+```bash
+$ curl -XGET -k -u 'admin:Xy3ZjyU)~(9IO8_n' "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS"
+{
+ "responseHeader":{
+ "status":0,
+ "QTime":1
+ },
+ "cluster":{
+ "collections":{
+ "kubedb-collection":{
+ "pullReplicas":"0",
+ "configName":"kubedb-collection.AUTOCREATED",
+ "replicationFactor":1,
+ "router":{
+ "name":"compositeId"
+ },
+ "nrtReplicas":1,
+ "tlogReplicas":"0",
+ "shards":{
+ "shard1":{
+ "range":"80000000-7fffffff",
+ "state":"active",
+ "replicas":{
+ "core_node2":{
+ "core":"kubedb-collection_shard1_replica_n1",
+ "node_name":"solr-combined-2.solr-combined-pods.demo:8983_solr",
+ "type":"NRT",
+ "state":"active",
+ "leader":"true",
+ "force_set_state":"false",
+ "base_url":"http://solr-combined-2.solr-combined-pods.demo:8983/solr"
+ }
+ },
+ "health":"GREEN"
+ }
+ },
+ "health":"GREEN",
+ "znodeVersion":4
+ }
+ },
+ "live_nodes":["solr-combined-2.solr-combined-pods.demo:8983_solr","solr-combined-1.solr-combined-pods.demo:8983_solr","solr-combined-0.solr-combined-pods.demo:8983_solr"]
+ }
+}
+```
+
+From the health information above, we can see that the health of the collections in our Solr cluster is `GREEN`, which means the cluster is healthy.
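+
+If you want to create a collection yourself (the `kubedb-collection` above was pre-created), Solr's standard Collections API works through the same port-forward; a sketch using the same credentials:
+
+```bash
+$ curl -u 'admin:Xy3ZjyU)~(9IO8_n' \
+  "http://localhost:8983/solr/admin/collections?action=CREATE&name=kubedb-collection&numShards=1&replicationFactor=1"
+```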
+
+## Halt Solr
+
+KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` termination policy. If the admission webhook is enabled, it prevents the user from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+To halt the database, we have to set `spec.terminationPolicy:` to `Halt` by updating it,
+
+```bash
+$ kubectl patch -n demo solr solr-combined -p '{"spec":{"terminationPolicy":"Halt"}}' --type="merge"
+solr.kubedb.com/solr-combined patched
+```
+
+Now, if you delete the Solr object, the KubeDB operator will delete every resource created for this Solr CR, but will leave the auth secrets and PVCs.
+
+```bash
+$ kubectl delete solr -n demo solr-combined
+solr.kubedb.com "solr-combined" deleted
+```
+
+Check resources:
+
+```bash
+$ kubectl get all,petset,secret,pvc -n demo -l 'app.kubernetes.io/instance=solr-combined'
+NAME TYPE DATA AGE
+secret/solr-combined-admin-cred kubernetes.io/basic-auth 2 9d
+secret/solr-combined-auth-config Opaque 1 9d
+secret/solr-combined-zk-digest kubernetes.io/basic-auth 2 9d
+secret/solr-combined-zk-digest-readonly kubernetes.io/basic-auth 2 9d
+
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
+persistentvolumeclaim/solr-combined-data-solr-combined-0 Bound pvc-6dc5573b-b59f-4ad7-beed-7438400ff500 1Gi RWO standard 24m
+persistentvolumeclaim/solr-combined-data-solr-combined-1 Bound pvc-1649cba5-b5e1-421b-aa73-ab6a4be0d637 1Gi RWO standard 24m
+persistentvolumeclaim/solr-combined-data-solr-combined-2 Bound pvc-dcb8c9e2-e64b-4a53-8b46-5c30301bb905 1Gi RWO standard 23m
+
+```
+
+## Resume Solr
+
+Say, the Solr CR was deleted with `spec.terminationPolicy` set to `Halt`, and you want to re-create the Solr cluster using the existing auth secrets and PVCs.
+
+You can do it by simply re-deploying the original Solr object:
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/solr/quickstart/overview/yamls/solr/solr.yaml
+solr.kubedb.com/solr-combined created
+```
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo solr solr-combined -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+solr.kubedb.com/solr-combined patched
+
+$ kubectl delete -n demo sl/solr-combined
+solr.kubedb.com "solr-combined" deleted
+
+$ kubectl delete namespace demo
+namespace "demo" deleted
+```
+
+## Tips for Testing
+
+If you are just testing some basic functionalities, you might want to avoid additional hassles due to some safety features that are great for the production environment. You can follow these tips to avoid them.
+
+1. **Use `storageType: Ephemeral`**. Databases are precious. You might not want to lose your data in your production environment if the database pod fails. So, we recommend using `spec.storageType: Durable` and providing a storage spec in the `spec.storage` section. For testing purposes, you can just use `spec.storageType: Ephemeral`. KubeDB will use [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) for storage. You will not need to provide the `spec.storage` section.
+2. **Use `terminationPolicy: WipeOut`**. It is nice to be able to resume the database from the previous one. So, we preserve all your `PVCs` and auth `Secrets`. If you don't want to resume the database, you can just use `spec.terminationPolicy: WipeOut`. It will clean up every resource that was created with the Solr CR. Checkout the [link](/docs/v2024.4.27/guides/solr/concepts/solr/#specterminationpolicy) for details. A combined sketch follows this list.
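+
+Putting the two tips together, a minimal throwaway Solr CR for testing might look like this (a sketch based on the spec used in this guide; `solr-test` is a hypothetical name):
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Solr
+metadata:
+  name: solr-test
+  namespace: demo
+spec:
+  version: 9.4.1
+  replicas: 1
+  storageType: Ephemeral
+  terminationPolicy: WipeOut
+  zookeeperRef:
+    name: zoo-com
+    namespace: demo
+```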
diff --git a/content/docs/v2024.4.27/guides/solr/quickstart/overview/yamls/solr/solr.yaml b/content/docs/v2024.4.27/guides/solr/quickstart/overview/yamls/solr/solr.yaml
new file mode 100644
index 0000000000..3b07dcb514
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/solr/quickstart/overview/yamls/solr/solr.yaml
@@ -0,0 +1,19 @@
+apiVersion: kubedb.com/v1alpha2
+kind: Solr
+metadata:
+  name: solr-combined
+  namespace: demo
+spec:
+  version: 9.4.1
+  terminationPolicy: Halt
+  replicas: 2
+  zookeeperRef:
+    name: zoo-com
+    namespace: demo
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 2Gi
+    storageClassName: standard
\ No newline at end of file
diff --git a/content/docs/v2024.4.27/guides/solr/quickstart/overview/yamls/zookeeper/zookeeper.yaml b/content/docs/v2024.4.27/guides/solr/quickstart/overview/yamls/zookeeper/zookeeper.yaml
new file mode 100644
index 0000000000..ba9551add2
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/solr/quickstart/overview/yamls/zookeeper/zookeeper.yaml
@@ -0,0 +1,17 @@
+apiVersion: kubedb.com/v1alpha2
+kind: ZooKeeper
+metadata:
+  name: zoo-com
+  namespace: demo
+spec:
+  version: 3.8.3
+  adminServerPort: 8080
+  replicas: 3
+  terminationPolicy: Halt
+  storage:
+    resources:
+      requests:
+        storage: "100Mi"
+    storageClassName: standard
+    accessModes:
+      - ReadWriteOnce
\ No newline at end of file
diff --git a/content/docs/v2024.4.27/guides/zookeeper/README.md b/content/docs/v2024.4.27/guides/zookeeper/README.md
new file mode 100644
index 0000000000..d561ceb7b3
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/zookeeper/README.md
@@ -0,0 +1,59 @@
+---
+title: ZooKeeper
+menu:
+ docs_v2024.4.27:
+ identifier: zk-readme-zookeeper
+ name: ZooKeeper
+ parent: zk-zookeeper-guides
+ weight: 10
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+url: /docs/v2024.4.27/guides/zookeeper/
+aliases:
+- /docs/v2024.4.27/guides/zookeeper/README/
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+## Supported ZooKeeper Features
+| Features | Availability |
+|---------------------------------------------------------------------------|:------------:|
+| Ensemble | ✓ |
+| Standalone | ✓ |
+| Authentication & Authorization | ✓ |
+| Custom Configuration | ✓ |
+| Grafana Dashboards | ✓ |
+| Externally manageable Auth Secret | ✓ |
+| Reconfigurable Health Checker | ✓ |
+| Backup/Recovery: Instant, Scheduled ([KubeStash](https://kubestash.com/)) | ✓ |
+| Persistent Volume | ✓ |
+| Initializing from Snapshot ( [Stash](https://stash.run/) ) | ✓ |
+| Builtin Prometheus Discovery | ✓ |
+| Using Prometheus operator | ✓ |
+
+## Life Cycle of a ZooKeeper Object
+
+
+
+
+
+## User Guide
+
+- [Quickstart ZooKeeper](/docs/v2024.4.27/guides/zookeeper/quickstart/quickstart) with KubeDB Operator.
+- Detail Concept of [ZooKeeper Object](/docs/v2024.4.27/guides/zookeeper/concepts/zookeeper).
+
+
+## Next Steps
+
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
\ No newline at end of file
diff --git a/content/docs/v2024.4.27/guides/zookeeper/_index.md b/content/docs/v2024.4.27/guides/zookeeper/_index.md
new file mode 100644
index 0000000000..f42e7e7b00
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/zookeeper/_index.md
@@ -0,0 +1,22 @@
+---
+title: ZooKeeper
+menu:
+ docs_v2024.4.27:
+ identifier: zk-zookeeper-guides
+ name: ZooKeeper
+ parent: guides
+ weight: 12
+menu_name: docs_v2024.4.27
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
diff --git a/content/docs/v2024.4.27/guides/zookeeper/concepts/_index.md b/content/docs/v2024.4.27/guides/zookeeper/concepts/_index.md
new file mode 100644
index 0000000000..3e57f2cb7f
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/zookeeper/concepts/_index.md
@@ -0,0 +1,22 @@
+---
+title: ZooKeeper Concepts
+menu:
+ docs_v2024.4.27:
+ identifier: zk-concepts-zookeeper
+ name: Concepts
+ parent: zk-zookeeper-guides
+ weight: 20
+menu_name: docs_v2024.4.27
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
diff --git a/content/docs/v2024.4.27/guides/zookeeper/concepts/appbinding.md b/content/docs/v2024.4.27/guides/zookeeper/concepts/appbinding.md
new file mode 100644
index 0000000000..89f2cd0bf9
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/zookeeper/concepts/appbinding.md
@@ -0,0 +1,151 @@
+---
+title: AppBinding CRD
+menu:
+ docs_v2024.4.27:
+ identifier: zk-appbinding-concepts
+ name: AppBinding
+ parent: zk-concepts-zookeeper
+ weight: 30
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# AppBinding
+
+## What is AppBinding
+
+An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding).
+
+If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database.
+
+KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`.
+
+## AppBinding CRD Specification
+
+Like any official Kubernetes resource, an `AppBinding` has `TypeMeta`, `ObjectMeta` and `Spec` sections. However, unlike other Kubernetes resources, it does not have a `Status` section.
+
+An `AppBinding` object created by `KubeDB` for ZooKeeper database is shown below,
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"kubedb.com/v1alpha2","kind":"ZooKeeper","metadata":{"annotations":{},"name":"zk-cluster","namespace":"demo"},"spec":{"podTemplate":{"spec":{"containers":[{"name":"zookeeper","resources":{"requests":{"cpu":"720m","memory":"846Mi"}}}]}},"replicas":3,"serviceTemplates":[{"alias":"primary","spec":{"type":"LoadBalancer"}}],"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"terminationPolicy":"WipeOut","version":"3.9.1"}}
+  creationTimestamp: "2024-05-02T10:02:45Z"
+  generation: 2
+  labels:
+    app.kubernetes.io/component: database
+    app.kubernetes.io/instance: zk-cluster
+    app.kubernetes.io/managed-by: kubedb.com
+    app.kubernetes.io/name: zookeepers.kubedb.com
+  name: zk-cluster
+  namespace: demo
+  ownerReferences:
+    - apiVersion: kubedb.com/v1alpha2
+      blockOwnerDeletion: true
+      controller: true
+      kind: ZooKeeper
+      name: zk-cluster
+      uid: 20e00408-abf1-470b-a049-bdf272b3e994
+  resourceVersion: "3548"
+  uid: 8fd15549-ab9c-4523-b85d-77275f620bb5
+spec:
+  appRef:
+    apiGroup: kubedb.com
+    kind: ZooKeeper
+    name: zk-cluster
+    namespace: demo
+  clientConfig:
+    service:
+      name: zk-cluster
+      port: 2181
+      scheme: http
+  secret:
+    name: zk-cluster-auth
+  type: kubedb.com/zookeeper
+  version: 3.9.1
+```
+
+Here, we are going to describe the sections of an `AppBinding` crd.
+
+### AppBinding `Spec`
+
+An `AppBinding` object has the following fields in the `spec` section:
+
+#### spec.type
+
+`spec.type` is an optional field that indicates the type of the app that this `AppBinding` is pointing to. Stash uses this field to resolve the values of `TARGET_APP_TYPE`, `TARGET_APP_GROUP` and `TARGET_APP_RESOURCE` variables of [BackupBlueprint](https://appscode.com/products/stash/latest/concepts/crds/backupblueprint/) object.
+
+This field follows the format `<app group>/<app resource>`. The above AppBinding is pointing to a `zookeeper` resource under the `kubedb.com` group.
+
+Here, the variables are parsed as follows:
+
+| Variable | Usage |
+| --------------------- |--------------------------------------------------------------------------------------------------------------------------------|
+| `TARGET_APP_GROUP` | Represents the application group where the respective app belongs (i.e: `kubedb.com`). |
+| `TARGET_APP_RESOURCE` | Represents the resource under that application group that this appbinding represents (i.e: `zookeeper`). |
+| `TARGET_APP_TYPE` | Represents the complete type of the application. It's simply `TARGET_APP_GROUP/TARGET_APP_RESOURCE` (i.e: `kubedb.com/zookeeper`). |
+
+#### spec.secret
+
+`spec.secret` specifies the name of the secret which contains the credentials that are required to access the database. This secret must be in the same namespace as the `AppBinding`.
+
+This secret must contain the following keys:
+
+| Key | Usage |
+| ---------- | ---------------------------------------------- |
+| `username` | Username of the target database. |
+| `password` | Password for the user specified by `username`. |
+
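+A quick way to confirm the referenced secret carries these keys (a sketch; the secret name `zk-cluster-auth` comes from the example AppBinding above, and `jq` is assumed to be installed):
+
+```bash
+$ kubectl get secret -n demo zk-cluster-auth -o jsonpath='{.data}' | jq 'keys'
+[
+  "password",
+  "username"
+]
+```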
+
+
+#### spec.appRef
+`spec.appRef` refers to the underlying application. It has 4 fields named `apiGroup`, `kind`, `name` & `namespace`.
+
+#### spec.clientConfig
+
+`spec.clientConfig` defines how to communicate with the target database. You can use either a URL or a Kubernetes service to connect with the database. You don't have to specify both of them.
+
+You can configure following fields in `spec.clientConfig` section:
+
+- **spec.clientConfig.url**
+
+ `spec.clientConfig.url` gives the location of the database, in standard URL form (i.e. `[scheme://]host:port/[path]`). This is particularly useful when the target database is running outside of the Kubernetes cluster. If your database is running inside the cluster, use `spec.clientConfig.service` section instead.
+
+> Note that, attempting to use a user or basic auth (e.g. `user:password@host:port`) is not allowed. Stash will insert them automatically from the respective secret. Fragments ("#...") and query parameters ("?...") are not allowed either.
+
+- **spec.clientConfig.service**
+
+ If you are running the database inside the Kubernetes cluster, you can use Kubernetes service to connect with the database. You have to specify the following fields in `spec.clientConfig.service` section if you manually create an `AppBinding` object.
+
+ - **name :** `name` indicates the name of the service that connects with the target database.
+ - **scheme :** `scheme` specifies the scheme (i.e. http, https) to use to connect with the database.
+ - **port :** `port` specifies the port where the target database is running.
+
+- **spec.clientConfig.insecureSkipTLSVerify**
+
+ `spec.clientConfig.insecureSkipTLSVerify` is used to disable TLS certificate verification while connecting with the database. We strongly discourage disabling TLS verification during backup. You should provide the respective CA bundle through the `spec.clientConfig.caBundle` field instead.
+
+- **spec.clientConfig.caBundle**
+
+ `spec.clientConfig.caBundle` is a PEM encoded CA bundle which will be used to validate the serving certificate of the database.
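+
+Putting these fields together, a minimal manually-created AppBinding that uses `url` instead of `service` might look like this (a sketch; the name, host, port and secret name are illustrative, not from a real deployment):
+
+```yaml
+apiVersion: appcatalog.appscode.com/v1alpha1
+kind: AppBinding
+metadata:
+  name: my-external-zookeeper
+  namespace: demo
+spec:
+  type: kubedb.com/zookeeper
+  clientConfig:
+    url: http://my-zookeeper.example.com:2181
+  secret:
+    name: my-zookeeper-auth
+```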
+
+## Next Steps
+
+- Learn how to use KubeDB to manage various databases [here](/docs/v2024.4.27/guides/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
diff --git a/content/docs/v2024.4.27/guides/zookeeper/concepts/catalog.md b/content/docs/v2024.4.27/guides/zookeeper/concepts/catalog.md
new file mode 100644
index 0000000000..fbec0232a8
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/zookeeper/concepts/catalog.md
@@ -0,0 +1,122 @@
+---
+title: ZooKeeperVersion CRD
+menu:
+ docs_v2024.4.27:
+ identifier: zk-catalog-concepts
+ name: ZooKeeperVersion
+ parent: zk-concepts-zookeeper
+ weight: 20
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# ZooKeeperVersion
+
+## What is ZooKeeperVersion
+
+`ZooKeeperVersion` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration to specify the docker images to be used for [ZooKeeper](https://zookeeper.apache.org/) databases deployed with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `ZooKeeperVersion` custom resource will be created automatically for every supported ZooKeeper version. You have to specify the name of the `ZooKeeperVersion` crd in the `spec.version` field of the [ZooKeeper](/docs/v2024.4.27/guides/zookeeper/concepts/zookeeper) crd. Then, KubeDB will use the docker images specified in the `ZooKeeperVersion` crd to create your expected database.
+
+Using a separate CRD to specify the respective docker images and pod security policy names allows the images and policies to be modified independently of the KubeDB operator. It also allows users to use a custom image for the database.
+
+## ZooKeeperVersion Specification
+
+As with all other Kubernetes objects, a ZooKeeperVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: ZooKeeperVersion
+metadata:
+  annotations:
+    meta.helm.sh/release-name: kubedb
+    meta.helm.sh/release-namespace: kubedb
+  creationTimestamp: "2024-05-02T09:41:52Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/instance: kubedb
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: kubedb-catalog
+    app.kubernetes.io/version: v2024.4.27
+    helm.sh/chart: kubedb-catalog-v2024.4.27
+  name: 3.9.1
+  resourceVersion: "1455"
+  uid: 3c5a4714-4ce2-4b41-8ad9-4899c3127dcc
+spec:
+  db:
+    image: ghcr.io/appscode-images/zookeeper:3.9.1
+  initContainer:
+    image: ghcr.io/kubedb/zookeeper-init:3.7-v1
+  securityContext:
+    runAsUser: 1000
+  version: 3.9.1
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `ZooKeeperVersion` crd. You have to specify this name in `spec.version` field of [ZooKeeper](/docs/v2024.4.27/guides/zookeeper/concepts/zookeeper) crd.
+
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of ZooKeeper server that has been used to build the docker image specified in `spec.db.image` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will skip processing this CRD object and will add an event to the CRD object specifying that the DB version is deprecated.
+
+### spec.db.image
+
+`spec.db.image` is a required field that specifies the docker image which will be used by the KubeDB operator to create the StatefulSet for the expected ZooKeeper server.
+
+### spec.initContainer.image
+
+`spec.initContainer.image` is a required field that specifies the image for init container.
+
+### spec.exporter.image
+
+`spec.exporter.image` is a required field that specifies the image which will be used to export Prometheus metrics.
+
+### spec.stash
+
+This holds the Backup & Restore task definitions, where a `TaskRef` has a `Name` & `Params` section. Params specifies a list of parameters to pass to the task.
+To learn more, visit [stash documentation](https://stash.run/)
+
+### spec.updateConstraints
+
+`updateConstraints` specifies the constraints that need to be considered during a version update. Here `allowList` contains the versions that are allowed for updating from the current version.
+An empty `allowList` indicates that all versions are accepted except those in the `denyList`.
+On the other hand, `denyList` contains all the rejected versions for the update request. An empty list indicates no version is rejected.
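+
+A hedged sketch of how this could appear in a `ZooKeeperVersion` spec (the exact field casing in the CRD schema and the version list are assumptions based on the description above, not taken from a live object):
+
+```yaml
+spec:
+  updateConstraints:
+    allowList:
+      - "3.8.3"
+      - "3.9.1"
+```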
+
+### spec.podSecurityPolicies.databasePolicyName
+
+`spec.podSecurityPolicies.databasePolicyName` is a required field that specifies the name of the pod security policy required to get the database server pod(s) running. To use a user-defined policy, the name of the policy has to be set in `spec.podSecurityPolicies` and in the list of allowed policy names in KubeDB operator like below:
+
+```bash
+helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
+ --namespace kubedb --create-namespace \
+ --set additionalPodSecurityPolicies[0]=custom-db-policy \
+ --set-file global.license=/path/to/the/license.txt \
+ --set global.featureGates.ZooKeeper=true \
+ --wait --burst-limit=10000 --debug
+```
+
+## Next Steps
+
+- Learn about ZooKeeper crd [here](/docs/v2024.4.27/guides/zookeeper/concepts/zookeeper).
+- Deploy your first ZooKeeper server with KubeDB by following the guide [here](/docs/v2024.4.27/guides/zookeeper/quickstart/quickstart).
diff --git a/content/docs/v2024.4.27/guides/zookeeper/concepts/zookeeper.md b/content/docs/v2024.4.27/guides/zookeeper/concepts/zookeeper.md
new file mode 100644
index 0000000000..154c70ebdb
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/zookeeper/concepts/zookeeper.md
@@ -0,0 +1,304 @@
+---
+title: ZooKeeper CRD
+menu:
+ docs_v2024.4.27:
+ identifier: zk-zookeeper-concepts
+ name: ZooKeeper
+ parent: zk-concepts-zookeeper
+ weight: 10
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# ZooKeeper
+
+## What is ZooKeeper
+
+`ZooKeeper` is a Kubernetes `Custom Resource Definitions` (CRD). It provides declarative configuration for [ZooKeeper](https://zookeeper.apache.org/) in a Kubernetes native way. You only need to describe the desired database configuration in a ZooKeeper object, and the KubeDB operator will create Kubernetes objects in the desired state for you.
+
+## ZooKeeper Spec
+
+As with all other Kubernetes objects, a ZooKeeper needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example ZooKeeper object.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ZooKeeper
+metadata:
+ name: zk-ensemble
+ namespace: demo
+spec:
+ version: 3.9.1
+ replicas: 3
+ disableAuth: false
+ adminServerPort: 8080
+ authSecret:
+ name: zk-auth
+ externallyManaged: false
+ storage:
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ monitor:
+ agent: prometheus.io/operator
+ prometheus:
+ serviceMonitor:
+ labels:
+ app: kubedb
+ interval: 10s
+ configSecret:
+ name: zk-custom-config
+ podTemplate:
+ metadata:
+ annotations:
+ passMe: ToDatabasePod
+ controller:
+ annotations:
+ passMe: ToStatefulSet
+ spec:
+ serviceAccountName: my-service-account
+ schedulerName: my-scheduler
+ nodeSelector:
+ disktype: ssd
+ imagePullSecrets:
+ - name: myregistrykey
+ serviceTemplates:
+ - alias: primary
+ metadata:
+ annotations:
+ passMe: ToService
+ spec:
+ type: NodePort
+ ports:
+ - name: http
+ port: 9200
+ terminationPolicy: Halt
+ halted: false
+ healthChecker:
+ periodSeconds: 15
+ timeoutSeconds: 10
+ failureThreshold: 2
+ disableWriteCheck: false
+```
+
+
+### spec.version
+
+`spec.version` is a required field specifying the name of the [ZooKeeperVersion](/docs/v2024.4.27/guides/zookeeper/concepts/catalog) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `ZooKeeperVersion` crds,
+
+- `3.7.2`
+- `3.8.3`
+- `3.9.1`
+
+
+### spec.disableAuth
+
+`spec.disableAuth` is an optional field that decides whether the ZooKeeper instance will be secured by auth or not.
+
+### spec.authSecret
+
+`spec.authSecret` is an optional field that points to a Secret used to hold credentials for `zookeeper` superuser. If not set, KubeDB operator creates a new Secret `{zookeeper-object-name}-auth` for storing the password for `zookeeper` superuser.
+
+We can use this field in 3 modes.
+
+1. Using an external secret. In this case, you need to create an auth secret first with the required fields, then specify the secret name when creating the ZooKeeper object using `spec.authSecret.name` and set `spec.authSecret.externallyManaged` to `true`.
+```yaml
+authSecret:
+  name: <secret-name>
+  externallyManaged: true
+```
+2. Specifying the secret name only. In this case, you need to specify the secret name when creating the ZooKeeper object using `spec.authSecret.name`. `externallyManaged` is `false` by default.
+```yaml
+authSecret:
+  name: <secret-name>
+```
+
+3. Let KubeDB do everything for you. In this case, no work for you.
+
+AuthSecret contains a `username` key and a `password` key which contains the `username` and `password` respectively for `zookeeper` superuser.
+
+Example:
+
+```bash
+$ kubectl create secret generic zk-auth -n demo \
+--from-literal=username=jhon-doe \
+--from-literal=password=6q8u_2jMOW-OOZXk
+secret "zk-auth" created
+```
+
+```yaml
+apiVersion: v1
+data:
+  password: NnE4dV8yak1PVy1PT1pYaw==
+  username: amhvbi1kb2U=
+kind: Secret
+metadata:
+  name: zk-auth
+  namespace: demo
+type: Opaque
+```
+
+Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher).
+
+
+### spec.storage
+
+If you set `spec.storageType` to `Durable`, then `spec.storage` is a required field that specifies the StorageClass of PVCs dynamically allocated to store data for the database. This storage spec will be passed to the StatefulSet created by the KubeDB operator to run database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+
+- `spec.storage.storageClassName` is the name of the StorageClass used to provision PVCs. PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
+- `spec.storage.accessModes` uses the same conventions as Kubernetes PVCs when requesting storage with specific access modes.
+- `spec.storage.resources` can be used to request specific quantities of storage. This follows the same resource model used by PVCs.
+
+To learn how to configure `spec.storage`, please visit the links below:
+
+- https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+
+### spec.monitor
+
+ZooKeeper managed by KubeDB can be monitored with builtin-Prometheus and Prometheus operator out-of-the-box.
+
+
+### spec.configSecret
+
+`spec.configSecret` is an optional field that allows users to provide custom configuration for ZooKeeper. This field accepts a [`VolumeSource`](https://github.com/kubernetes/api/blob/release-1.11/core/v1/types.go#L47). So you can use any Kubernetes supported volume source such as `configMap`, `secret`, `azureDisk` etc.
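+
+For example, a custom configuration could be supplied through a secret like this (a sketch; the secret name `zk-custom-config` matches the example spec above, and `zoo.cfg` is the standard ZooKeeper configuration file name, though the exact key the operator expects may differ):
+
+```bash
+$ kubectl create secret generic zk-custom-config -n demo \
+    --from-file=./zoo.cfg
+secret/zk-custom-config created
+```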
+
+### spec.podTemplate
+
+KubeDB allows providing a template for the database pod through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for the ZooKeeper server.
+
+KubeDB accepts the following fields to set in `spec.podTemplate`:
+
+- metadata:
+ - annotations (pod's annotation)
+- controller:
+ - annotations (statefulset's annotation)
+- spec:
+ - args
+ - env
+ - resources
+ - initContainers
+ - imagePullSecrets
+ - nodeSelector
+ - affinity
+ - serviceAccountName
+ - schedulerName
+ - tolerations
+ - priorityClassName
+ - priority
+ - securityContext
+ - livenessProbe
+ - readinessProbe
+ - lifecycle
+
+You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/ea366935d5bad69d7643906c7556923271592513/api/v1/types.go#L42-L259).
+Uses of some fields of `spec.podTemplate` are described below.
+
+#### spec.podTemplate.spec.args
+ `spec.podTemplate.spec.args` is an optional field. This can be used to provide additional arguments to database installation.
+
+#### spec.podTemplate.spec.env
+
+`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the ZooKeeper docker image.
+
+
+#### spec.podTemplate.spec.imagePullSecrets
+
+`KubeDB` provides the flexibility of deploying a ZooKeeper server from a private Docker registry.
+
+#### spec.podTemplate.spec.nodeSelector
+
+`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) .
+
+#### spec.podTemplate.spec.serviceAccountName
+
+ `serviceAccountName` is an optional field supported by KubeDB Operator (version 0.13.0 and higher) that can be used to specify a custom service account to fine tune role based access control.
+
+ If this field is left empty, the KubeDB operator will create a service account name matching ZooKeeper crd name. Role and RoleBinding that provide necessary access permissions will also be generated automatically for this service account.
+
+ If a service account name is given, but there's no existing service account by that name, the KubeDB operator will create one, and Role and RoleBinding that provide necessary access permissions will also be generated for this service account.
+
+ If a service account name is given, and there's an existing service account by that name, the KubeDB operator will use that existing service account. Since this service account is not managed by KubeDB, users are responsible for providing necessary access permissions manually.
+
+#### spec.podTemplate.spec.resources
+
+`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
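+
+A short sketch of setting pod resources through the template (the container name and values mirror the example AppBinding annotation earlier in these docs; treat them as illustrative):
+
+```yaml
+spec:
+  podTemplate:
+    spec:
+      containers:
+        - name: zookeeper
+          resources:
+            requests:
+              cpu: 720m
+              memory: 846Mi
+```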
+
+### spec.serviceTemplates
+
+You can also provide a template for the services created by KubeDB operator for ZooKeeper server through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services.
+
+KubeDB allows following fields to set in `spec.serviceTemplates`:
+- `alias` represents the identifier of the service. It has the following possible value:
+ - `primary` is used for the primary service identification.
+ - `standby` is used for the secondary service identification.
+ - `stats` is used for the exporter service identification.
+
+- metadata:
+ - annotations
+- spec:
+ - type
+ - ports
+ - clusterIP
+ - externalIPs
+ - loadBalancerIP
+ - loadBalancerSourceRanges
+ - externalTrafficPolicy
+ - healthCheckNodePort
+ - sessionAffinityConfig
+
+See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.16.3/api/v1/types.go#L163) to understand these fields in detail.
+
+### spec.terminationPolicy
+
+`terminationPolicy` gives flexibility whether to `nullify`(reject) the delete operation of `ZooKeeper` crd or which resources KubeDB should keep or delete when you delete `ZooKeeper` crd. KubeDB provides following four termination policies:
+
+- DoNotTerminate
+- Halt
+- Delete (`Default`)
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement `DoNotTerminate` feature. If admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+The following table shows what KubeDB does when you delete the ZooKeeper crd for different termination policies,
+
+| Behavior | DoNotTerminate | Halt | Delete | WipeOut |
+| ----------------------------------- | :------------: | :------: | :------: | :------: |
+| 1. Block Delete operation | ✓ | ✗ | ✗ | ✗ |
+| 2. Delete StatefulSet | ✗ | ✓ | ✓ | ✓ |
+| 3. Delete Services | ✗ | ✓ | ✓ | ✓ |
+| 4. Delete PVCs | ✗ | ✗ | ✓ | ✓ |
+| 5. Delete Secrets | ✗ | ✗ | ✗ | ✓ |
+| 6. Delete Snapshots | ✗ | ✗ | ✗ | ✓ |
+| 7. Delete Snapshot data from bucket | ✗ | ✗ | ✗ | ✓ |
+
+If you don't specify `spec.terminationPolicy`, KubeDB uses the `Delete` termination policy by default.
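+
+For example, to switch an existing ZooKeeper to `WipeOut` before deleting it (the same `kubectl patch` pattern used in the Solr quickstart; `zk-ensemble` is the example object above):
+
+```bash
+$ kubectl patch -n demo zk zk-ensemble -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+zookeeper.kubedb.com/zk-ensemble patched
+```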
+
+### spec.halted
+Indicates that the database is halted and all offshoot Kubernetes resources except PVCs are deleted.
+
+### spec.healthChecker
+It defines the attributes for the health checker.
+- `spec.healthChecker.periodSeconds` specifies how often to perform the health check.
+- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out.
+- `spec.healthChecker.failureThreshold` specifies the minimum number of consecutive failures for the healthChecker to be considered failed.
+- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not.
+
+Know details about KubeDB Health checking from this [blog post](https://appscode.com/blog/post/kubedb-health-checker/).
+
+## Next Steps
+
+- Learn how to use KubeDB to run a ZooKeeper server [here](/docs/v2024.4.27/guides/zookeeper/README).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
diff --git a/content/docs/v2024.4.27/guides/zookeeper/quickstart/_index.md b/content/docs/v2024.4.27/guides/zookeeper/quickstart/_index.md
new file mode 100644
index 0000000000..ce64f5192b
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/zookeeper/quickstart/_index.md
@@ -0,0 +1,22 @@
+---
+title: ZooKeeper Quickstart
+menu:
+ docs_v2024.4.27:
+ identifier: zk-quickstart-zookeeper
+ name: Quickstart
+ parent: zk-zookeeper-guides
+ weight: 15
+menu_name: docs_v2024.4.27
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
diff --git a/content/docs/v2024.4.27/guides/zookeeper/quickstart/quickstart.md b/content/docs/v2024.4.27/guides/zookeeper/quickstart/quickstart.md
new file mode 100644
index 0000000000..39a6966130
--- /dev/null
+++ b/content/docs/v2024.4.27/guides/zookeeper/quickstart/quickstart.md
@@ -0,0 +1,430 @@
+---
+title: ZooKeeper Quickstart
+menu:
+ docs_v2024.4.27:
+ identifier: zk-quickstart-quickstart
+ name: Overview
+ parent: zk-quickstart-zookeeper
+ weight: 10
+menu_name: docs_v2024.4.27
+section_menu_id: guides
+info:
+ autoscaler: v0.30.0
+ cli: v0.45.0
+ dashboard: v0.21.0
+ installer: v2024.4.27
+ ops-manager: v0.32.0
+ provisioner: v0.45.0
+ schema-manager: v0.21.0
+ ui-server: v0.21.0
+ version: v2024.4.27
+ webhook-server: v0.21.0
+---
+
+> New to KubeDB? Please start [here](/docs/v2024.4.27/README).
+
+# ZooKeeper QuickStart
+
+This tutorial will show you how to use KubeDB to run a ZooKeeper server.
+
+
+
+
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps [here](/docs/v2024.4.27/setup/README). Please set `global.featureGates.ZooKeeper=true`
+to install ZooKeeper CRDs.
+
+- [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) is required to run KubeDB. Check the available StorageClass in cluster.
+
+ ```bash
+ $ kubectl get storageclasses
+ NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+ standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 20h
+ ```
+
+- To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. Run the following command to prepare your cluster for this tutorial:
+
+ ```bash
+ $ kubectl create namespace demo
+ namespace/demo created
+
+ $ kubectl get namespaces
+ NAME STATUS AGE
+ demo Active 10s
+ ```
+
+> Note: The yaml files used in this tutorial are stored in [docs/examples](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
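+
+As mentioned above, here is a sketch of enabling the ZooKeeper feature gate, assuming KubeDB was installed with Helm (the release name `kubedb`, chart reference `appscode/kubedb`, and namespace `kubedb` are assumptions; follow the linked setup guide for your exact install method):
+
+```bash
+# Assumption: KubeDB was installed as Helm release "kubedb" from the appscode/kubedb chart.
+$ helm upgrade kubedb appscode/kubedb \
+    --namespace kubedb --reuse-values \
+    --set global.featureGates.ZooKeeper=true
+```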
+
+## Find Available ZooKeeperVersions
+
+When you install KubeDB, it creates a `ZooKeeperVersion` CRD object for each supported ZooKeeper version. Check them:
+
+```bash
+$ kubectl get zookeeperversions
+NAME VERSION DB_IMAGE DEPRECATED AGE
+3.7.2 3.7.2 ghcr.io/appscode-images/zookeeper:3.7.2 94s
+3.8.3 3.8.3 ghcr.io/appscode-images/zookeeper:3.8.3 94s
+3.9.1 3.9.1 ghcr.io/appscode-images/zookeeper:3.9.1 94s
+```
+
+## Create a ZooKeeper server
+
+KubeDB implements a `ZooKeeper` CRD to define the specification of a ZooKeeper server. Below is the `ZooKeeper` object created in this tutorial.
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ZooKeeper
+metadata:
+ name: zk-quickstart
+ namespace: demo
+spec:
+ version: "3.9.1"
+ adminServerPort: 8080
+ replicas: 3
+ storage:
+ resources:
+ requests:
+ storage: "1Gi"
+ storageClassName: "standard"
+ accessModes:
+ - ReadWriteOnce
+ terminationPolicy: "WipeOut"
+```
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/zookeeper/quickstart/zoo.yaml
+zookeeper.kubedb.com/zk-quickstart created
+```
+
+Here,
+
+- `spec.version` is the name of the `ZooKeeperVersion` CRD where the Docker images are specified. In this tutorial, a ZooKeeper 3.9.1 database is created.
+- `spec.storage` specifies the PVC spec that will be dynamically allocated to store data for this database. This storage spec is passed to the PetSet created by the KubeDB operator to run the database pods. You can specify any StorageClass available in your cluster with appropriate resource requests.
+- `spec.terminationPolicy` gives flexibility to either reject the delete operation of the `ZooKeeper` CRD outright, or to control which resources KubeDB should keep or delete when you delete the `ZooKeeper` CRD. If the admission webhook is enabled, it prevents users from deleting the database as long as `spec.terminationPolicy` is set to `DoNotTerminate`.
+
+> Note: The `spec.storage` section is used to create a PVC for each database pod. The PVC is created with the storage size specified in the `storage.resources.requests` field. Don't specify limits here; PVCs are not resized automatically.
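+
+Since PVCs are not resized automatically, if you later need more space and your StorageClass supports volume expansion, one option is to patch each PVC by hand. A sketch (assuming the StorageClass has `allowVolumeExpansion: true`; the PVC name follows the pattern this tutorial creates):
+
+```bash
+# Assumption: the StorageClass allows volume expansion.
+$ kubectl patch pvc -n demo zk-quickstart-data-zk-quickstart-0 \
+    -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
+```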
+
+KubeDB operator watches for `ZooKeeper` objects using the Kubernetes API. When a `ZooKeeper` object is created, KubeDB operator will create a new PetSet and a Service with the matching ZooKeeper object name. KubeDB operator will also create a governing headless service for the database pods (named `<name>-pods`, as seen below), if one is not already present.
+
+```bash
+$ kubectl get zk -n demo
+NAME TYPE VERSION STATUS AGE
+zk-quickstart kubedb.com/v1alpha2 3.9.1 Ready 105s
+
+$ kubectl describe zk -n demo zk-quickstart
+Name: zk-quickstart
+Namespace: demo
+Labels:            <none>
+Annotations:       <none>
+API Version: kubedb.com/v1alpha2
+Kind: ZooKeeper
+Metadata:
+ Creation Timestamp: 2024-05-02T08:25:26Z
+ Finalizers:
+ kubedb.com
+ Generation: 3
+ Resource Version: 4219
+ UID: dd69e514-3049-4d08-8b57-92f8246dda35
+Spec:
+ Admin Server Port: 8080
+ Auth Secret:
+ Name: zk-quickstart-auth
+ Health Checker:
+ Failure Threshold: 3
+ Period Seconds: 20
+ Timeout Seconds: 10
+ Pod Placement Policy:
+ Name: default
+ Pod Template:
+ Controller:
+ Metadata:
+ Spec:
+ Containers:
+ Name: zookeeper
+ Resources:
+ Limits:
+ Memory: 1Gi
+ Requests:
+ Cpu: 500m
+ Memory: 1Gi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Group: 1000
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Init Containers:
+ Name: zookeeper-init
+ Resources:
+ Limits:
+ Memory: 512Mi
+ Requests:
+ Cpu: 200m
+ Memory: 512Mi
+ Security Context:
+ Allow Privilege Escalation: false
+ Capabilities:
+ Drop:
+ ALL
+ Run As Group: 1000
+ Run As Non Root: true
+ Run As User: 1000
+ Seccomp Profile:
+ Type: RuntimeDefault
+ Security Context:
+ Fs Group: 1000
+ Replicas: 3
+ Storage:
+ Access Modes:
+ ReadWriteOnce
+ Resources:
+ Requests:
+ Storage: 1Gi
+ Storage Class Name: standard
+ Termination Policy: WipeOut
+ Version: 3.9.1
+Status:
+ Conditions:
+ Last Transition Time: 2024-05-02T08:25:26Z
+ Message: The KubeDB operator has started the provisioning of ZooKeeper: demo/zk-quickstart
+ Observed Generation: 1
+ Reason: DatabaseProvisioningStartedSuccessfully
+ Status: True
+ Type: ProvisioningStarted
+ Last Transition Time: 2024-05-02T08:25:50Z
+ Message: All replicas are ready for ZooKeeper demo/zk-quickstart
+ Observed Generation: 3
+ Reason: AllReplicasReady
+ Status: True
+ Type: ReplicaReady
+ Last Transition Time: 2024-05-02T08:26:10Z
+ Message: The ZooKeeper: demo/zk-quickstart is accepting connection requests.
+ Observed Generation: 3
+ Reason: DatabaseAcceptingConnectionRequest
+ Status: True
+ Type: AcceptingConnection
+ Last Transition Time: 2024-05-02T08:26:10Z
+ Message: The ZooKeeper: demo/zk-quickstart is ready.
+ Observed Generation: 3
+ Reason: ReadinessCheckSucceeded
+ Status: True
+ Type: Ready
+ Last Transition Time: 2024-05-02T08:26:13Z
+ Message: ZooKeeper: demo/zk-quickstart is successfully provisioned.
+ Observed Generation: 3
+ Reason: DatabaseSuccessfullyProvisioned
+ Status: True
+ Type: Provisioned
+ Phase: Ready
+Events:            <none>
+
+
+$ kubectl get petset -n demo
+NAME AGE
+zk-quickstart 3m14s
+
+
+$ kubectl get pvc -n demo
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+zk-quickstart-data-zk-quickstart-0 Bound pvc-1e1850b8-4e5c-418c-a722-89df98f28998 1Gi RWO standard 3m40s
+zk-quickstart-data-zk-quickstart-1 Bound pvc-e2bb4b02-b138-4589-9e43-bcaf599b6513 1Gi RWO standard 3m31s
+zk-quickstart-data-zk-quickstart-2 Bound pvc-988ab6b2-e5ed-4c75-8418-31186bd1d3db 1Gi RWO standard 3m25s
+
+
+$ kubectl get pv -n demo
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+pvc-1e1850b8-4e5c-418c-a722-89df98f28998 1Gi RWO Delete Bound demo/zk-quickstart-data-zk-quickstart-0 standard 3m52s
+pvc-988ab6b2-e5ed-4c75-8418-31186bd1d3db 1Gi RWO Delete Bound demo/zk-quickstart-data-zk-quickstart-2 standard 3m40s
+pvc-e2bb4b02-b138-4589-9e43-bcaf599b6513 1Gi RWO Delete Bound demo/zk-quickstart-data-zk-quickstart-1 standard 3m46s
+
+
+$ kubectl get service -n demo
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+zk-quickstart                ClusterIP   10.96.26.38    <none>        2181/TCP                     4m15s
+zk-quickstart-admin-server   ClusterIP   10.96.49.134   <none>        8080/TCP                     4m15s
+zk-quickstart-pods           ClusterIP   None           <none>        2181/TCP,2888/TCP,3888/TCP   4m15s
+```
+
+KubeDB operator sets the `status.phase` to `Ready` once the database is successfully created. Run the following command to see the modified ZooKeeper object:
+
+```bash
+$ kubectl get zk -n demo zk-quickstart -o yaml
+apiVersion: kubedb.com/v1alpha2
+kind: ZooKeeper
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"kubedb.com/v1alpha2","kind":"ZooKeeper","metadata":{"annotations":{},"name":"zk-quickstart","namespace":"demo"},"spec":{"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"terminationPolicy":"WipeOut","version":"3.9.1"}}
+ creationTimestamp: "2024-05-02T08:25:26Z"
+ finalizers:
+ - kubedb.com
+ generation: 3
+ name: zk-quickstart
+ namespace: demo
+ resourceVersion: "4219"
+ uid: dd69e514-3049-4d08-8b57-92f8246dda35
+spec:
+ adminServerPort: 8080
+ authSecret:
+ name: zk-quickstart-auth
+ healthChecker:
+ failureThreshold: 3
+ periodSeconds: 20
+ timeoutSeconds: 10
+ podPlacementPolicy:
+ name: default
+ podTemplate:
+ controller: {}
+ metadata: {}
+ spec:
+ containers:
+ - name: zookeeper
+ resources:
+ limits:
+ memory: 1Gi
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 1000
+ runAsNonRoot: true
+ runAsUser: 1000
+ seccompProfile:
+ type: RuntimeDefault
+ initContainers:
+ - name: zookeeper-init
+ resources:
+ limits:
+ memory: 512Mi
+ requests:
+ cpu: 200m
+ memory: 512Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 1000
+ runAsNonRoot: true
+ runAsUser: 1000
+ seccompProfile:
+ type: RuntimeDefault
+ securityContext:
+ fsGroup: 1000
+ replicas: 3
+ storage:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+ storageClassName: standard
+ terminationPolicy: WipeOut
+ version: 3.9.1
+status:
+ conditions:
+ - lastTransitionTime: "2024-05-02T08:25:26Z"
+ message: 'The KubeDB operator has started the provisioning of ZooKeeper: demo/zk-quickstart'
+ observedGeneration: 1
+ reason: DatabaseProvisioningStartedSuccessfully
+ status: "True"
+ type: ProvisioningStarted
+ - lastTransitionTime: "2024-05-02T08:25:50Z"
+ message: All replicas are ready for ZooKeeper demo/zk-quickstart
+ observedGeneration: 3
+ reason: AllReplicasReady
+ status: "True"
+ type: ReplicaReady
+ - lastTransitionTime: "2024-05-02T08:26:10Z"
+ message: 'The ZooKeeper: demo/zk-quickstart is accepting connection requests.'
+ observedGeneration: 3
+ reason: DatabaseAcceptingConnectionRequest
+ status: "True"
+ type: AcceptingConnection
+ - lastTransitionTime: "2024-05-02T08:26:10Z"
+ message: 'The ZooKeeper: demo/zk-quickstart is ready.'
+ observedGeneration: 3
+ reason: ReadinessCheckSucceeded
+ status: "True"
+ type: Ready
+ - lastTransitionTime: "2024-05-02T08:26:13Z"
+ message: 'ZooKeeper: demo/zk-quickstart is successfully provisioned.'
+ observedGeneration: 3
+ reason: DatabaseSuccessfullyProvisioned
+ status: "True"
+ type: Provisioned
+ phase: Ready
+```
+
+Now, you can connect to this database using the created services. In this tutorial, we connect to the ZooKeeper server from inside a pod.
+
+```bash
+$ kubectl exec -it -n demo zk-quickstart-0 -- sh
+
+$ echo ruok | nc localhost 2181
+imok
+
+$ zkCli.sh create /hello-dir hello-message
+Connecting to localhost:2181
+...
+Connection Log Messages
+...
+Created /hello-dir
+
+$ zkCli.sh get /hello-dir
+Connecting to localhost:2181
+...
+Connection Log Messages
+...
+hello-message
+```
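+
+You can also reach the server from your workstation without exec-ing into a pod. A sketch using `kubectl port-forward` against the admin-server service created above (the AdminServer HTTP interface is part of ZooKeeper itself; it was exposed via `adminServerPort: 8080` in the spec):
+
+```bash
+# Terminal 1: forward the admin-server service to your local machine.
+$ kubectl port-forward -n demo svc/zk-quickstart-admin-server 8080:8080
+
+# Terminal 2: query ZooKeeper's AdminServer "ruok" command over HTTP.
+$ curl http://localhost:8080/commands/ruok
+{"command":"ruok","error":null}
+```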
+
+## DoNotTerminate Property
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidatingWebhook` feature available in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` behavior. If the admission webhook is enabled, it prevents users from deleting the database as long as `spec.terminationPolicy` is set to `DoNotTerminate`. You can see this below:
+
+```bash
+$ kubectl delete zk zk-quickstart -n demo
+Error from server (BadRequest): admission webhook "zookeeper.validators.kubedb.com" denied the request: zookeeper "zk-quickstart" can't be deleted. To delete, change spec.terminationPolicy
+```
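+
+Note that the quickstart object was created with `terminationPolicy: WipeOut`, so to reproduce this denial yourself you would first switch the policy (a sketch, mirroring the patch command used in the cleanup section below):
+
+```bash
+$ kubectl patch -n demo zk/zk-quickstart -p '{"spec":{"terminationPolicy":"DoNotTerminate"}}' --type="merge"
+zookeeper.kubedb.com/zk-quickstart patched
+```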
+
+Now, run `kubectl edit zk zk-quickstart -n demo` to set `spec.terminationPolicy` to `Halt`. Then you will be able to delete/halt the database.
+
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+$ kubectl patch -n demo zk/zk-quickstart -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+zookeeper.kubedb.com/zk-quickstart patched
+
+$ kubectl delete -n demo zk/zk-quickstart
+zookeeper.kubedb.com "zk-quickstart" deleted
+
+$ kubectl delete ns demo
+namespace "demo" deleted
+```
+
+## Tips for Testing
+
+If you are just testing some basic functionality, you might want to avoid the extra hassle caused by safety features that are great for a production environment. You can follow these tips to avoid them.
+
+**Use `terminationPolicy: WipeOut`**. It is nice to be able to resume a database from a previous one, so by default we preserve all your `PVCs` and auth `Secrets`. If you don't want to resume the database, you can just set `spec.terminationPolicy: WipeOut`. It will delete everything created by KubeDB for a particular ZooKeeper CRD when you delete that CRD.
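+
+If you want to see what the softer `Halt` policy preserves, a sketch (run against a fresh instance, before cleaning up): delete the object under `Halt` and list what survives; PVCs and the auth Secret are kept while the other offshoot resources are removed.
+
+```bash
+$ kubectl patch -n demo zk/zk-quickstart -p '{"spec":{"terminationPolicy":"Halt"}}' --type="merge"
+$ kubectl delete -n demo zk/zk-quickstart
+
+# PVCs and the auth Secret should still be listed; pods and services are gone.
+$ kubectl get pvc,secret -n demo
+```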
+
+## Next Steps
+
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/v2024.4.27/CONTRIBUTING).
diff --git a/content/docs/v2024.4.27/images/druid/Druid-CRD-Lifecycle.png b/content/docs/v2024.4.27/images/druid/Druid-CRD-Lifecycle.png
new file mode 100644
index 0000000000..c1968f34a5
Binary files /dev/null and b/content/docs/v2024.4.27/images/druid/Druid-CRD-Lifecycle.png differ
diff --git a/content/docs/v2024.4.27/images/druid/Druid-Web-Console.png b/content/docs/v2024.4.27/images/druid/Druid-Web-Console.png
new file mode 100644
index 0000000000..2565c0fb92
Binary files /dev/null and b/content/docs/v2024.4.27/images/druid/Druid-Web-Console.png differ
diff --git a/content/docs/v2024.4.27/images/kafka/Kafka-CRD-Lifecycle.png b/content/docs/v2024.4.27/images/kafka/Kafka-CRD-Lifecycle.png
index e47e09467e..70e5cf996a 100644
Binary files a/content/docs/v2024.4.27/images/kafka/Kafka-CRD-Lifecycle.png and b/content/docs/v2024.4.27/images/kafka/Kafka-CRD-Lifecycle.png differ
diff --git a/content/docs/v2024.4.27/images/kafka/connectcluster/connectcluster-crd-lifecycle.png b/content/docs/v2024.4.27/images/kafka/connectcluster/connectcluster-crd-lifecycle.png
new file mode 100644
index 0000000000..d5223a6712
Binary files /dev/null and b/content/docs/v2024.4.27/images/kafka/connectcluster/connectcluster-crd-lifecycle.png differ
diff --git a/content/docs/v2024.4.27/images/pgpool/quickstart/lifecycle.png b/content/docs/v2024.4.27/images/pgpool/quickstart/lifecycle.png
new file mode 100644
index 0000000000..c8e76aa06f
Binary files /dev/null and b/content/docs/v2024.4.27/images/pgpool/quickstart/lifecycle.png differ
diff --git a/content/docs/v2024.4.27/images/singlestore/singlestore-lifecycle.png b/content/docs/v2024.4.27/images/singlestore/singlestore-lifecycle.png
new file mode 100644
index 0000000000..edf0f03569
Binary files /dev/null and b/content/docs/v2024.4.27/images/singlestore/singlestore-lifecycle.png differ
diff --git a/content/docs/v2024.4.27/images/zookeeper/zookeeper-lifecycle.png b/content/docs/v2024.4.27/images/zookeeper/zookeeper-lifecycle.png
new file mode 100644
index 0000000000..d9655ed70a
Binary files /dev/null and b/content/docs/v2024.4.27/images/zookeeper/zookeeper-lifecycle.png differ
diff --git a/data/products/appscode.json b/data/products/appscode.json
index e93d7eac09..389e00a92b 100644
--- a/data/products/appscode.json
+++ b/data/products/appscode.json
@@ -12,7 +12,7 @@
"themeColor": ""
},
"heroImage": {
- "src": "/assets/images/products/appscode/appscode-hero.png",
+ "src": "/assets/images/products/appscode/appscode-hero.webp",
"alt": "AppsCode"
},
"logo": {
@@ -47,7 +47,7 @@
"summary": "Our Mission is to Accelerate the transition to Containers by building a Kubernetes-native Data Platform",
"get_in_touch_url": "https://appscode.com/contact/",
"intro": "8az5rYUxyGs",
- "hero_image": "/assets/images/products/appscode/appscode-hero.png"
+ "hero_image": "/assets/images/products/appscode/appscode-hero.webp"
},
"main_products": [
{
diff --git a/static/assets/images/customers/persons/daniel_gormly.png b/static/assets/images/customers/persons/daniel_gormly.png
index 06c1928cb0..14257286c9 100644
Binary files a/static/assets/images/customers/persons/daniel_gormly.png and b/static/assets/images/customers/persons/daniel_gormly.png differ
diff --git a/static/assets/images/customers/persons/dario_freddi.jpeg b/static/assets/images/customers/persons/dario_freddi.jpeg
index 68b43f9317..44e9ea2cf4 100644
Binary files a/static/assets/images/customers/persons/dario_freddi.jpeg and b/static/assets/images/customers/persons/dario_freddi.jpeg differ
diff --git a/static/assets/images/customers/persons/dario_freddi.png b/static/assets/images/customers/persons/dario_freddi.png
index 6aa8580215..c6da13e6e2 100644
Binary files a/static/assets/images/customers/persons/dario_freddi.png and b/static/assets/images/customers/persons/dario_freddi.png differ
diff --git a/static/assets/images/customers/persons/luca_ravazzolo.png b/static/assets/images/customers/persons/luca_ravazzolo.png
index 013d70e6eb..6a554a6bcf 100644
Binary files a/static/assets/images/customers/persons/luca_ravazzolo.png and b/static/assets/images/customers/persons/luca_ravazzolo.png differ
diff --git "a/static/assets/images/customers/persons/manuel_ram\303\255rez_l\303\263pez.jpeg" "b/static/assets/images/customers/persons/manuel_ram\303\255rez_l\303\263pez.jpeg"
index 965ff0f1aa..8a3598364c 100644
Binary files "a/static/assets/images/customers/persons/manuel_ram\303\255rez_l\303\263pez.jpeg" and "b/static/assets/images/customers/persons/manuel_ram\303\255rez_l\303\263pez.jpeg" differ
diff --git "a/static/assets/images/customers/persons/manuel_ram\303\255rez_l\303\263pez.png" "b/static/assets/images/customers/persons/manuel_ram\303\255rez_l\303\263pez.png"
index 32df1eda5f..d2f9a80142 100644
Binary files "a/static/assets/images/customers/persons/manuel_ram\303\255rez_l\303\263pez.png" and "b/static/assets/images/customers/persons/manuel_ram\303\255rez_l\303\263pez.png" differ
diff --git a/static/assets/images/customers/persons/mario_kozjak.jpg b/static/assets/images/customers/persons/mario_kozjak.jpg
index fec2c24262..ff4bc4d427 100644
Binary files a/static/assets/images/customers/persons/mario_kozjak.jpg and b/static/assets/images/customers/persons/mario_kozjak.jpg differ
diff --git a/static/assets/images/customers/persons/mario_kozjak.png b/static/assets/images/customers/persons/mario_kozjak.png
index 0300378524..68d37be8ab 100644
Binary files a/static/assets/images/customers/persons/mario_kozjak.png and b/static/assets/images/customers/persons/mario_kozjak.png differ
diff --git "a/static/assets/images/customers/persons/richer_larivi\303\250re.jpeg" "b/static/assets/images/customers/persons/richer_larivi\303\250re.jpeg"
index 708a050c94..79970cc9ce 100644
Binary files "a/static/assets/images/customers/persons/richer_larivi\303\250re.jpeg" and "b/static/assets/images/customers/persons/richer_larivi\303\250re.jpeg" differ
diff --git "a/static/assets/images/customers/persons/richer_larivi\303\250re.png" "b/static/assets/images/customers/persons/richer_larivi\303\250re.png"
index be60b65efd..d4a2829c87 100644
Binary files "a/static/assets/images/customers/persons/richer_larivi\303\250re.png" and "b/static/assets/images/customers/persons/richer_larivi\303\250re.png" differ
diff --git a/static/assets/images/gradient-bg.jpg b/static/assets/images/gradient-bg.jpg
new file mode 100644
index 0000000000..16c67e5e07
Binary files /dev/null and b/static/assets/images/gradient-bg.jpg differ
diff --git a/static/assets/images/gradient-bg.webp b/static/assets/images/gradient-bg.webp
new file mode 100644
index 0000000000..da500f32c7
Binary files /dev/null and b/static/assets/images/gradient-bg.webp differ
diff --git a/static/assets/images/products/appscode/icons/global/commission.png b/static/assets/images/products/appscode/icons/global/commission.png
new file mode 100644
index 0000000000..da4f557e61
Binary files /dev/null and b/static/assets/images/products/appscode/icons/global/commission.png differ
diff --git a/static/assets/images/products/appscode/icons/global/expand.png b/static/assets/images/products/appscode/icons/global/expand.png
new file mode 100644
index 0000000000..93e0ffdf97
Binary files /dev/null and b/static/assets/images/products/appscode/icons/global/expand.png differ
diff --git a/static/assets/images/products/appscode/icons/global/handshake.svg b/static/assets/images/products/appscode/icons/global/handshake.svg
new file mode 100644
index 0000000000..fe0700a790
--- /dev/null
+++ b/static/assets/images/products/appscode/icons/global/handshake.svg
@@ -0,0 +1,14 @@
+
diff --git a/static/assets/images/products/appscode/icons/global/resources.png b/static/assets/images/products/appscode/icons/global/resources.png
new file mode 100644
index 0000000000..7f468358d5
Binary files /dev/null and b/static/assets/images/products/appscode/icons/global/resources.png differ
diff --git a/static/assets/images/products/stash/stash-hero.png b/static/assets/images/products/stash/stash-hero.png
index 475c2042c3..afb0d6e3ad 100644
Binary files a/static/assets/images/products/stash/stash-hero.png and b/static/assets/images/products/stash/stash-hero.png differ
diff --git a/static/assets/images/products/voyager/voyager-hero.png b/static/assets/images/products/voyager/voyager-hero.png
index 6e24de7ced..b079657207 100644
Binary files a/static/assets/images/products/voyager/voyager-hero.png and b/static/assets/images/products/voyager/voyager-hero.png differ
diff --git a/static/assets/images/shape/decorative-el-large-2.png b/static/assets/images/shape/decorative-el-large-2.png
new file mode 100644
index 0000000000..5002353058
Binary files /dev/null and b/static/assets/images/shape/decorative-el-large-2.png differ
diff --git a/static/assets/images/shape/decorative-el-large.png b/static/assets/images/shape/decorative-el-large.png
new file mode 100644
index 0000000000..3924a94a43
Binary files /dev/null and b/static/assets/images/shape/decorative-el-large.png differ
diff --git a/static/assets/images/shape/decorative-el.svg b/static/assets/images/shape/decorative-el.svg
new file mode 100644
index 0000000000..d22978b442
--- /dev/null
+++ b/static/assets/images/shape/decorative-el.svg
@@ -0,0 +1,7 @@
+
diff --git a/static/assets/images/shape/hexagon.svg b/static/assets/images/shape/hexagon.svg
new file mode 100644
index 0000000000..934b72cc1f
--- /dev/null
+++ b/static/assets/images/shape/hexagon.svg
@@ -0,0 +1,10 @@
+