diff --git a/content/docs/v2024.9.30/guides/druid/backup/application-level/index.md b/content/docs/v2024.9.30/guides/druid/backup/application-level/index.md index afb3e78f49..c2a0cc54ef 100644 --- a/content/docs/v2024.9.30/guides/druid/backup/application-level/index.md +++ b/content/docs/v2024.9.30/guides/druid/backup/application-level/index.md @@ -251,7 +251,7 @@ spec: version: 30.0.0 ``` -KubeStash uses the `AppBinding` CR to connect with the target database. It requires the following two fields to set in AppBinding's `.spec` section. +KubeStash uses the `AppBinding` CR to connect with the target database. It requires the following two fields to be set in the AppBinding's `.spec` section. - `.spec.clientConfig.service.name` specifies the name of the Service that connects to the database. - `.spec.secret` specifies the name of the Secret that holds necessary credentials to access the database. @@ -800,4 +800,4 @@ kubectl delete retentionpolicies.storage.kubestash.com -n demo demo-retention kubectl delete restoresessions.core.kubestash.com -n demo restore-sample-druid kubectl delete druid -n demo sample-druid kubectl delete druid -n dev restored-druid -``` \ No newline at end of file +``` diff --git a/content/docs/v2024.9.30/guides/druid/backup/auto-backup/index.md b/content/docs/v2024.9.30/guides/druid/backup/auto-backup/index.md index cf065ca2d3..92c28735cd 100644 --- a/content/docs/v2024.9.30/guides/druid/backup/auto-backup/index.md +++ b/content/docs/v2024.9.30/guides/druid/backup/auto-backup/index.md @@ -197,7 +197,7 @@ spec: Here, -- `.spec.backupConfigurationTemplate.backends[*].storageRef` refers our earlier created `gcs-storage` backupStorage. +- `.spec.backupConfigurationTemplate.backends[*].storageRef` refers to the `gcs-storage` BackupStorage we created earlier. - `.spec.backupConfigurationTemplate.sessions[*].schedule` specifies that we want to backup the database at `5 minutes` interval.
Let's create the `BackupBlueprint` we have shown above, @@ -207,7 +207,7 @@ $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" > backupblueprint.core.kubestash.com/druid-default-backup-blueprint created ``` -Now, we are ready to backup our `Druid` databases using few annotations. +Now, we are ready to back up our `Druid` databases using a few annotations. ## Deploy Sample Druid Database @@ -825,4 +825,4 @@ kubectl delete secret -n demo encrypt-secret kubectl delete retentionpolicies.storage.kubestash.com -n demo demo-retention kubectl delete druid -n demo sample-druid kubectl delete druid -n demo sample-druid-2 -``` \ No newline at end of file +``` diff --git a/content/docs/v2024.9.30/guides/druid/concepts/appbinding.md b/content/docs/v2024.9.30/guides/druid/concepts/appbinding.md index 07c119bdf0..dad084275b 100644 --- a/content/docs/v2024.9.30/guides/druid/concepts/appbinding.md +++ b/content/docs/v2024.9.30/guides/druid/concepts/appbinding.md @@ -29,7 +29,7 @@ info: An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding). -If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), an `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database.
KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. diff --git a/content/docs/v2024.9.30/guides/mariadb/concepts/appbinding/index.md b/content/docs/v2024.9.30/guides/mariadb/concepts/appbinding/index.md index 511a40bd33..efb941e9ea 100644 --- a/content/docs/v2024.9.30/guides/mariadb/concepts/appbinding/index.md +++ b/content/docs/v2024.9.30/guides/mariadb/concepts/appbinding/index.md @@ -29,7 +29,7 @@ info: An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding). -If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), an `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`.
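The AppBinding hunks above all hinge on the same two `.spec` fields called out earlier: `.spec.clientConfig.service.name` and `.spec.secret`. As a reviewer's reference, a minimal sketch of such an object might look like the following — the object name, port, scheme, and `type` are illustrative assumptions, not values taken from the docs in this diff:

```yaml
# Hypothetical AppBinding sketch. Only the two fields the guides
# describe (spec.clientConfig.service.name and spec.secret) are
# essential; all concrete values here are assumed for illustration.
apiVersion: appcatalog.appscode.com/v1alpha1
kind: AppBinding
metadata:
  name: sample-db            # assumed name
  namespace: demo
spec:
  clientConfig:
    service:
      name: sample-db        # Service that connects to the database
      port: 27017            # assumed port
      scheme: mongodb        # assumed scheme
  secret:
    name: sample-db-auth     # Secret holding the access credentials
  type: kubedb.com/mongodb   # assumed application type
```

For KubeDB-provisioned databases an equivalent object is created automatically, as the updated prose notes; a manually created AppBinding only needs to point these two fields at the target database's Service and credential Secret.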
diff --git a/content/docs/v2024.9.30/guides/memcached/concepts/appbinding.md b/content/docs/v2024.9.30/guides/memcached/concepts/appbinding.md index fde06fdeda..130f7dcf28 100644 --- a/content/docs/v2024.9.30/guides/memcached/concepts/appbinding.md +++ b/content/docs/v2024.9.30/guides/memcached/concepts/appbinding.md @@ -29,7 +29,7 @@ info: An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding). -If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), an `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. diff --git a/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/application-level/index.md b/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/application-level/index.md index 36a4375e48..240e16019d 100644 --- a/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/application-level/index.md +++ b/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/application-level/index.md @@ -250,7 +250,7 @@ We are going to store our backed up data into a `S3` bucket.
At first, we need t Let's create a secret called `s3-secret` with access credentials to our desired S3 bucket, -```console +```bash $ echo -n '' > AWS_ACCESS_KEY_ID $ echo -n '' > AWS_SECRET_ACCESS_KEY $ kubectl create secret generic -n demo s3-secret \ @@ -301,7 +301,7 @@ We have to create a `BackupConfiguration` targeting respective MongoDB crd of ou EncryptionSecret refers to the Secret containing the encryption key which will be used to encode/decode the backed up data. Let's create a secret called `encry-secret` -```console +```bash $ kubectl create secret generic encry-secret -n demo \ --from-literal=RESTIC_PASSWORD='123' -n demo secret/encry-secret created @@ -328,7 +328,7 @@ spec: Let's create the RetentionPolicy we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/application-level/examples/retentionpolicy.yaml retentionpolicy.storage.kubestash.com/backup-rp created ``` diff --git a/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/auto-backup/index.md b/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/auto-backup/index.md index 7f070a0480..8d4d1aa9b3 100644 --- a/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/auto-backup/index.md +++ b/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/auto-backup/index.md @@ -47,7 +47,7 @@ You should be familiar with the following `KubeStash` concepts: To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create `demo` namespace if you haven't created yet. -```console +```bash $ kubectl create ns demo namespace/demo created ``` @@ -64,7 +64,7 @@ We are going to store our backed up data into a S3 bucket. 
At first, we need to Let's create a secret called `s3-secret` with access credentials to our desired S3 bucket, -```console +```bash $ echo -n '' > AWS_ACCESS_KEY_ID $ echo -n '' > AWS_SECRET_ACCESS_KEY $ kubectl create secret generic -n demo s3-secret \ @@ -100,7 +100,7 @@ spec: Let's create the `BackupStorage` we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/auto-backup/examples/backupstorage.yaml backupstorage.storage.kubestash.com/s3-storage created ``` @@ -110,7 +110,7 @@ We also need to create an secret for encrypt data and retention policy for `Back EncryptionSecret refers to the Secret containing the encryption key which will be used to encode/decode the backed up data. Let's create a secret called `encry-secret` -```console +```bash $ kubectl create secret generic encry-secret -n demo \ --from-literal=RESTIC_PASSWORD='123' -n demo secret/encry-secret created @@ -137,7 +137,7 @@ spec: Let's create the RetentionPolicy we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/auto-backup/examples/retentionpolicy.yaml retentionpolicy.storage.kubestash.com/backup-rp created ``` diff --git a/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/logical/replicaset/index.md b/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/logical/replicaset/index.md index e6cde0317a..f51c514190 100644 --- a/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/logical/replicaset/index.md +++ b/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/logical/replicaset/index.md @@ -44,7 +44,7 @@ You have to be familiar with following custom resources: To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create `demo` namespace if you haven't created yet. 
-```console +```bash $ kubectl create ns demo namespace/demo created ``` @@ -84,7 +84,7 @@ spec: Create the above `MongoDB` crd, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/mongodb-replicaset.yaml mongodb.kubedb.com/sample-mg-rs created ``` @@ -93,7 +93,7 @@ KubeDB will deploy a MongoDB database according to the above specification. It w Let's check if the database is ready to use, -```console +```bash $ kubectl get mongodb -n demo sample-mg-rs NAME VERSION STATUS AGE sample-mg-rs 4.2.24 Ready 2m27s @@ -101,7 +101,7 @@ sample-mg-rs 4.2.24 Ready 2m27s The database is `Ready`. Verify that KubeDB has created a Secret and a Service for this database using the following commands, -```console +```bash $ kubectl get secret -n demo -l=app.kubernetes.io/instance=sample-mg-rs NAME TYPE DATA AGE sample-mg-rs-auth kubernetes.io/basic-auth 2 3m53s @@ -121,7 +121,7 @@ Here, we have to use service `sample-mg-rs` and secret `sample-mg-rs-auth` to co For simplicity, we are going to exec into the database pod and create some sample data. At first, find out the database pod using the following command, -```console +```bash $ kubectl get pods -n demo --selector="app.kubernetes.io/instance=sample-mg-rs" NAME READY STATUS RESTARTS AGE sample-mg-rs-0 2/2 Running 0 6m15s @@ -131,7 +131,7 @@ sample-mg-rs-2 2/2 Running 0 5m14s Now, let's exec into the pod and create a table, -```console +```bash $ export USER=$(kubectl get secrets -n demo sample-mg-rs-auth -o jsonpath='{.data.\username}' | base64 -d) $ export PASSWORD=$(kubectl get secrets -n demo sample-mg-rs-auth -o jsonpath='{.data.\password}' | base64 -d) @@ -186,7 +186,7 @@ We are going to store our backed up data into a S3 bucket. 
At first, we need to Let's create a secret called `s3-secret` with access credentials to our desired S3 bucket, -```console +```bash $ echo -n '' > AWS_ACCESS_KEY_ID $ echo -n '' > AWS_SECRET_ACCESS_KEY $ kubectl create secret generic -n demo s3-secret \ @@ -222,7 +222,7 @@ spec: Let's create the `BackupStorage` we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/backupstorage-replicaset.yaml backupstorage.storage.kubestash.com/s3-storage-replicaset created ``` @@ -237,7 +237,7 @@ We have to create a `BackupConfiguration` targeting respective MongoDB crd of ou EncryptionSecret refers to the Secret containing the encryption key which will be used to encode/decode the backed up data. Let's create a secret called `encry-secret` -```console +```bash $ kubectl create secret generic encry-secret -n demo \ --from-literal=RESTIC_PASSWORD='123' -n demo secret/encry-secret created @@ -264,7 +264,7 @@ spec: Let's create the RetentionPolicy we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/retentionpolicy.yaml retentionpolicy.storage.kubestash.com/backup-rp created ``` @@ -324,7 +324,7 @@ Here, Let's create the `BackupConfiguration` crd we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/backupconfiguration-replicaset.yaml backupconfiguration.core.kubestash.com/mg created ``` @@ -333,7 +333,7 @@ backupconfiguration.core.kubestash.com/mg created If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. 
Let's verify the `Phase` of the BackupConfiguration, -```console +```bash $ kubectl get backupconfiguration -n demo NAME PHASE PAUSED AGE mg Ready 85s @@ -345,7 +345,7 @@ KubeStash will create a CronJob with the schedule specified in `spec.sessions.sc Verify that the CronJob has been created using the following command, -```console +```bash $ kubectl get cronjob -n demo NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE trigger-mg-frequent */3 * * * * False 0 101s @@ -357,7 +357,7 @@ The `trigger-mg-frequent` CronJob will trigger a backup on each schedule by crea Wait for the next schedule. Run the following command to watch `BackupSession` crd, -```console +```bash $ kubectl get backupsession -n demo NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE mg-frequent-1701940862 BackupConfiguration mg Succeeded 3m16s @@ -370,7 +370,7 @@ We can see above that the backup session has succeeded. Now, we are going to ver Once a backup is complete, KubeStash will update the respective `Snapshot` crd to reflect the backup. It will be created when a backup is triggered. Check that the `Snapshot` Phase to verify backup. -```console +```bash $ kubectl get snapshot -n demo NAME REPOSITORY SESSION SNAPSHOT-TIME DELETION-POLICY PHASE VERIFICATION-STATUS AGE s3-repo-mg-frequent-1701940862 s3-repo frequent 2023-12-07T09:21:07Z Delete Succeeded 3m53s @@ -379,7 +379,7 @@ s3-repo-mg-frequent-1701941042 s3-repo frequent 2023-12-07T09:24:08Z KubeStash will also update the respective `Repository` crd to reflect the backup. Check that the repository `s3-repo` has been updated by the following command, -```console +```bash $ kubectl get repository -n demo s3-repo NAME INTEGRITY SNAPSHOT-COUNT SIZE PHASE LAST-SUCCESSFUL-BACKUP AGE s3-repo true 2 2.883 KiB Ready 55s 8m5s @@ -404,7 +404,7 @@ backupconfiguration.core.kubestash.com/mg patched Now, wait for a moment. KubeStash will pause the BackupConfiguration. 
Verify that the BackupConfiguration has been paused, -```console +```bash $ kubectl get backupconfiguration -n demo mg NAME PHASE PAUSED AGE mg Ready true 11m @@ -441,14 +441,14 @@ spec: Create the above `MongoDB` crd, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/mongodb-replicaset-restore.yaml mongodb.kubedb.com/sample-mg-rs-restore created ``` Let's check if the database is ready to use, -```console +```bash $ kubectl get mg -n demo sample-mg-rs-restore NAME VERSION STATUS AGE sample-mg-rs-restore 4.2.24 Ready 2m45s @@ -456,7 +456,7 @@ sample-mg-rs-restore 4.2.24 Ready 2m45s Let's verify all the databases of this `sample-mg-rs-restore` by exec into its pod -```console +```bash $ export USER=$(kubectl get secrets -n demo sample-mg-rs-restore-auth -o jsonpath='{.data.\username}' | base64 -d) $ export PASSWORD=$(kubectl get secrets -n demo sample-mg-rs-restore-auth -o jsonpath='{.data.\password}' | base64 -d) @@ -530,7 +530,7 @@ Here, Let's create the `RestoreSession` crd we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/replicaset/examples/restoresession-replicaset.yaml restoresession.core.kubestash.com/mg-rs-restore created ``` @@ -539,7 +539,7 @@ Once, you have created the `RestoreSession` crd, KubeStash will create a job to Run the following command to watch `RestoreSession` phase, -```console +```bash $ kubectl get restoresession -n demo mg-rs-restore -w NAME REPOSITORY FAILURE-POLICY PHASE DURATION AGE mg-rs-restore s3-repo Succeeded 9s 34s @@ -553,7 +553,7 @@ In this section, we are going to verify that the desired data has been restored Lets, exec into the database pod and list available tables, -```console +```bash $ kubectl exec -it -n demo sample-mg-rs-restore-0 -- mongo admin -u $USER -p $PASSWORD rs0:PRIMARY> show 
dbs @@ -597,7 +597,7 @@ So, from the above output, we can see the database `newdb` that we had created e To cleanup the Kubernetes resources created by this tutorial, run: -```console +```bash kubectl delete -n demo restoresession mg-rs-restore kubectl delete -n demo backupconfiguration mg kubectl delete -n demo mg sample-mg-rs diff --git a/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/logical/sharding/index.md b/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/logical/sharding/index.md index 6e59d6e138..60cd31a68f 100644 --- a/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/logical/sharding/index.md +++ b/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/logical/sharding/index.md @@ -44,7 +44,7 @@ You have to be familiar with following custom resources: To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create `demo` namespace if you haven't created yet. -```console +```bash $ kubectl create ns demo namespace/demo created ``` @@ -93,7 +93,7 @@ spec: Create the above `MongoDB` crd, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/mongodb-sharding.yaml mongodb.kubedb.com/sample-mg-sh created ``` @@ -102,7 +102,7 @@ KubeDB will deploy a MongoDB database according to the above specification. It w Let's check if the database is ready to use, -```console +```bash $ kubectl get mongodb -n demo sample-mg-sh NAME VERSION STATUS AGE sample-mg-sh 4.2.24 Ready 5m39s @@ -110,7 +110,7 @@ sample-mg-sh 4.2.24 Ready 5m39s The database is `Ready`. 
Verify that KubeDB has created a Secret and a Service for this database using the following commands, -```console +```bash $ kubectl get secret -n demo -l=app.kubernetes.io/instance=sample-mg-sh NAME TYPE DATA AGE sample-mg-sh-auth kubernetes.io/basic-auth 2 21m @@ -134,7 +134,7 @@ Here, we have to use service `sample-mg-sh` and secret `sample-mg-sh-auth` to co For simplicity, we are going to exec into the database pod and create some sample data. At first, find out the database mongos pod using the following command, -```console +```bash $ kubectl get pods -n demo --selector="mongodb.kubedb.com/node.mongos=sample-mg-sh-mongos" NAME READY STATUS RESTARTS AGE sample-mg-sh-mongos-0 1/1 Running 0 21m @@ -143,7 +143,7 @@ sample-mg-sh-mongos-1 1/1 Running 0 21m Now, let's exec into the pod and create a table, -```console +```bash $ export USER=$(kubectl get secrets -n demo sample-mg-sh-auth -o jsonpath='{.data.\username}' | base64 -d) $ export PASSWORD=$(kubectl get secrets -n demo sample-mg-sh-auth -o jsonpath='{.data.\password}' | base64 -d) @@ -194,7 +194,7 @@ We are going to store our backed up data into a S3 bucket. At first, we need to Let's create a secret called `s3-secret` with access credentials to our desired S3 bucket, -```console +```bash $ echo -n '' > AWS_ACCESS_KEY_ID $ echo -n '' > AWS_SECRET_ACCESS_KEY $ kubectl create secret generic -n demo s3-secret \ @@ -230,7 +230,7 @@ spec: Let's create the `BackupStorage` we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/backupstorage-sharding.yaml backupstorage.storage.kubestash.com/s3-storage-sharding created ``` @@ -245,7 +245,7 @@ We have to create a `BackupConfiguration` targeting respective MongoDB crd of ou EncryptionSecret refers to the Secret containing the encryption key which will be used to encode/decode the backed up data. 
Let's create a secret called `encry-secret` -```console +```bash $ kubectl create secret generic encry-secret -n demo \ --from-literal=RESTIC_PASSWORD='123' -n demo secret/encry-secret created @@ -255,7 +255,7 @@ secret/encry-secret created `RetentionPolicy` specifies how the old Snapshots should be cleaned up. This is a namespaced CRD.However, we can refer it from other namespaces as long as it is permitted via `.spec.usagePolicy`. Below is the YAML of the `RetentionPolicy` called `backup-rp` -```console +```yaml apiVersion: storage.kubestash.com/v1alpha1 kind: RetentionPolicy metadata: @@ -272,7 +272,7 @@ spec: Let's create the RetentionPolicy we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/retentionpolicy.yaml retentionpolicy.storage.kubestash.com/backup-rp created ``` @@ -332,7 +332,7 @@ Here, Let's create the `BackupConfiguration` crd we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/backupconfiguration-sharding.yaml backupconfiguration.core.kubestash.com/mg created ``` @@ -341,7 +341,7 @@ backupconfiguration.core.kubestash.com/mg created If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful.
Let's verify the `Phase` of the BackupConfiguration, -```console +```bash $ kubectl get backupconfiguration -n demo NAME PHASE PAUSED AGE mg Ready 85s @@ -353,7 +353,7 @@ KubeStash will create a CronJob with the schedule specified in `spec.sessions.sc Verify that the CronJob has been created using the following command, -```console +```bash $ kubectl get cronjob -n demo NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE trigger-mg-frequent */3 * * * * False 0 101s @@ -365,7 +365,7 @@ The `trigger-mg-frequent` CronJob will trigger a backup on each schedule by crea Wait for the next schedule. Run the following command to watch `BackupSession` crd, -```console +```bash $ kubectl get backupsession -n demo NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE mg-frequent-1701950402 BackupConfiguration mg Succeeded 3m5s @@ -378,7 +378,7 @@ We can see above that the backup session has succeeded. Now, we are going to ver Once a backup is complete, KubeStash will update the respective `Snapshot` crd to reflect the backup. It will be created when a backup is triggered. Check that the `Snapshot` Phase to verify backup. -```console +```bash $ kubectl get snapshot -n demo NAME REPOSITORY SESSION SNAPSHOT-TIME DELETION-POLICY PHASE VERIFICATION-STATUS AGE s3-repo-mg-frequent-1701950402 s3-repo frequent 2023-12-07T12:00:11Z Delete Succeeded 3m37s @@ -387,7 +387,7 @@ s3-repo-mg-frequent-1701950582 s3-repo frequent 2023-12-07T12:03:08Z KubeStash will also update the respective `Repository` crd to reflect the backup. Check that the repository `s3-repo` has been updated by the following command, -```console +```bash $ kubectl get repository -n demo s3-repo NAME INTEGRITY SNAPSHOT-COUNT SIZE PHASE LAST-SUCCESSFUL-BACKUP AGE s3-repo true 2 95.660 KiB Ready 41s 4m3s @@ -412,7 +412,7 @@ backupconfiguration.core.kubestash.com/mg patched Now, wait for a moment. KubeStash will pause the BackupConfiguration. 
Verify that the BackupConfiguration has been paused, -```console +```bash $ kubectl get backupconfiguration -n demo mg NAME PHASE PAUSED AGE mg Ready true 11m @@ -457,14 +457,14 @@ spec: Create the above `MongoDB` crd, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/mongodb-sharding-restore.yaml mongodb.kubedb.com/sample-mg-sh-restore created ``` Let's check if the database is ready to use, -```console +```bash $ kubectl get mg -n demo sample-mg-sh-restore NAME VERSION STATUS AGE sample-mg-sh-restore 4.2.24 Ready 7m47s @@ -472,7 +472,7 @@ sample-mg-sh-restore 4.2.24 Ready 7m47s Let's verify all the databases of this `sample-mg-sh-restore` by exec into its mongos pod -```console +```bash $ export USER=$(kubectl get secrets -n demo sample-mg-sh-restore-auth -o jsonpath='{.data.\username}' | base64 -d) $ export PASSWORD=$(kubectl get secrets -n demo sample-mg-sh-restore-auth -o jsonpath='{.data.\password}' | base64 -d) @@ -545,7 +545,7 @@ Here, Let's create the `RestoreSession` crd we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/sharding/examples/restoresession-sharding.yaml restoresession.core.kubestash.com/mg-sh-restore created ``` @@ -554,7 +554,7 @@ Once, you have created the `RestoreSession` crd, KubeStash will create a job to Run the following command to watch `RestoreSession` phase, -```console +```bash $ kubectl get restoresession -n demo mg-sh-restore -w NAME REPOSITORY FAILURE-POLICY PHASE DURATION AGE mg-sh-restore s3-repo Succeeded 15s 48s @@ -568,7 +568,7 @@ In this section, we are going to verify that the desired data has been restored Lets, exec into the database's mongos pod and list available tables, -```console +```bash $ kubectl exec -it -n demo sample-mg-sh-restore-mongos-0 -- mongo admin -u $USER -p $PASSWORD 
mongos> show dbs @@ -611,7 +611,7 @@ So, from the above output, we can see the database `newdb` that we had created e To cleanup the Kubernetes resources created by this tutorial, run: -```console +```bash kubectl delete -n demo restoresession mg-sh-restore kubectl delete -n demo backupconfiguration mg kubectl delete -n demo mg sample-mg-sh diff --git a/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/logical/standalone/index.md b/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/logical/standalone/index.md index 3d0c9cc76d..d7d6b62bc9 100644 --- a/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/logical/standalone/index.md +++ b/content/docs/v2024.9.30/guides/mongodb/backup/kubestash/logical/standalone/index.md @@ -44,7 +44,7 @@ You have to be familiar with following custom resources: To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create `demo` namespace if you haven't created yet. -```console +```bash $ kubectl create ns demo namespace/demo created ``` @@ -82,7 +82,7 @@ spec: Create the above `MongoDB` crd, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/mongodb.yaml mongodb.kubedb.com/sample-mongodb created ``` @@ -91,7 +91,7 @@ KubeDB will deploy a MongoDB database according to the above specification. It w Let's check if the database is ready to use, -```console +```bash $ kubectl get mg -n demo sample-mongodb NAME VERSION STATUS AGE sample-mongodb 4.2.24 Ready 2m9s @@ -99,7 +99,7 @@ sample-mongodb 4.2.24 Ready 2m9s The database is `Ready`. 
Verify that KubeDB has created a Secret and a Service for this database using the following commands, -```console +```bash $ kubectl get secret -n demo -l=app.kubernetes.io/instance=sample-mongodb NAME TYPE DATA AGE sample-mongodb-auth Opaque 2 2m28s @@ -118,7 +118,7 @@ Here, we have to use service `sample-mongodb` and secret `sample-mongodb-auth` t For simplicity, we are going to exec into the database pod and create some sample data. At first, find out the database pod using the following command, -```console +```bash $ kubectl get pods -n demo --selector="app.kubernetes.io/instance=sample-mongodb" NAME READY STATUS RESTARTS AGE sample-mongodb-0 1/1 Running 0 12m @@ -126,7 +126,7 @@ sample-mongodb-0 1/1 Running 0 12m Now, let's exec into the pod and create a table, -```console +```bash $ export USER=$(kubectl get secrets -n demo sample-mongodb-auth -o jsonpath='{.data.\username}' | base64 -d) $ export PASSWORD=$(kubectl get secrets -n demo sample-mongodb-auth -o jsonpath='{.data.\password}' | base64 -d) @@ -180,7 +180,7 @@ We are going to store our backed up data into a S3 bucket. At first, we need to Let's create a secret called `s3-secret` with access credentials to our desired S3 bucket, -```console +```bash $ echo -n '' > AWS_ACCESS_KEY_ID $ echo -n '' > AWS_SECRET_ACCESS_KEY $ kubectl create secret generic -n demo s3-secret \ @@ -216,7 +216,7 @@ spec: Let's create the `BackupStorage` we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/backupstorage.yaml storage.kubestash.com/s3-storage created ``` @@ -231,7 +231,7 @@ We have to create a `BackupConfiguration` targeting respective MongoDB crd of ou EncryptionSecret refers to the Secret containing the encryption key which will be used to encode/decode the backed up data. 
Let's create a secret called `encry-secret` -```console +```bash $ kubectl create secret generic encry-secret -n demo \ --from-literal=RESTIC_PASSWORD='123' -n demo secret/encry-secret created @@ -258,7 +258,7 @@ spec: Let's create the RetentionPolicy we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/retentionpolicy.yaml retentionpolicy.storage.kubestash.com/backup-rp created ``` @@ -318,7 +318,7 @@ Here, Let's create the `BackupConfiguration` crd we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/backupconfiguration.yaml backupconfiguration.core.kubestash.com/mg created ``` @@ -327,7 +327,7 @@ backupconfiguration.core.kubestash.com/mg created If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. Let's verify the `Phase` of the BackupConfiguration, -```console +```bash $ kubectl get backupconfiguration -n demo NAME PHASE PAUSED AGE mg Ready 85s @@ -339,7 +339,7 @@ KubeStash will create a CronJob with the schedule specified in `spec.sessions.sc Verify that the CronJob has been created using the following command, -```console +```bash $ kubectl get cronjob -n demo NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE trigger-mg-frequent */3 * * * * False 0 101s @@ -351,7 +351,7 @@ The `trigger-mg-frequent` CronJob will trigger a backup on each schedule by crea Wait for the next schedule. Run the following command to watch `BackupSession` crd, -```console +```bash $ kubectl get backupsession -n demo NAME INVOKER-TYPE INVOKER-NAME PHASE DURATION AGE mg-frequent-1701923402 BackupConfiguration mg Succeeded 3m4s @@ -364,7 +364,7 @@ We can see above that the backup session has succeeded. 
Now, we are going to ver Once a backup is complete, KubeStash will update the respective `Snapshot` crd to reflect the backup. It will be created when a backup is triggered. Check that the `Snapshot` Phase to verify backup. -```console +```bash $ kubectl get snapshot -n demo NAME REPOSITORY SESSION SNAPSHOT-TIME DELETION-POLICY PHASE VERIFICATION-STATUS AGE s3-repo-mg-frequent-1701923402 s3-repo frequent 2023-12-07T04:30:10Z Delete Succeeded 3m25s @@ -374,7 +374,7 @@ s3-repo-mg-frequent-1701923582 s3-repo frequent 2023-12-07T04:33:06Z KubeStash will also update the respective `Repository` crd to reflect the backup. Check that the repository `s3-repo` has been updated by the following command, -```console +```bash $ kubectl get repository -n demo s3-repo NAME INTEGRITY SNAPSHOT-COUNT SIZE PHASE LAST-SUCCESSFUL-BACKUP AGE s3-repo true 2 2.613 KiB Ready 2m42s 8m38s @@ -399,7 +399,7 @@ backupconfiguration.core.kubestash.com/mg patched Now, wait for a moment. KubeStash will pause the BackupConfiguration. 
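The pause above is an ordinary merge patch that flips `spec.paused`; resuming is the same patch with `false`. A stand-alone sketch of the patch bodies (assuming `kubectl patch --type merge` is how the pause is applied; the `grep` merely sanity-checks the strings we built, no cluster access needed):

```bash
# Assumed patch bodies for pausing/resuming a BackupConfiguration, e.g.
#   kubectl patch backupconfiguration -n demo mg --type merge -p "$pause_patch"
pause_patch='{"spec":{"paused":true}}'
resume_patch='{"spec":{"paused":false}}'

# Confirm the JSON contains the field we intend to flip.
echo "$pause_patch"  | grep -o '"paused":true'
echo "$resume_patch" | grep -o '"paused":false'
```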
Verify that the BackupConfiguration has been paused, -```console +```bash $ kubectl get backupconfiguration -n demo mg NAME PHASE PAUSED AGE mg Ready true 26m @@ -434,14 +434,14 @@ spec: Create the above `MongoDB` crd, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/mongodb-restore.yaml mongodb.kubedb.com/restore-mongodb created ``` Let's check if the database is ready to use, -```console +```bash $ kubectl get mg -n demo restore-mongodb NAME VERSION STATUS AGE restore-mongodb 4.2.24 Ready 3m30s @@ -449,7 +449,7 @@ restore-mongodb 4.2.24 Ready 3m30s Let's verify all the databases of this `restore-mongodb` by exec into its pod -```console +```bash $ export USER=$(kubectl get secrets -n demo restore-mongodb-auth -o jsonpath='{.data.\username}' | base64 -d) $ export PASSWORD=$(kubectl get secrets -n demo restore-mongodb-auth -o jsonpath='{.data.\password}' | base64 -d) @@ -523,7 +523,7 @@ Here, Let's create the `RestoreSession` crd we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/kubestash/logical/standalone/examples/restoresession.yaml restoresession.core.kubestash.com/mg-restore created ``` @@ -532,7 +532,7 @@ Once, you have created the `RestoreSession` crd, KubeStash will create a job to Run the following command to watch `RestoreSession` phase, -```console +```bash $ kubectl get restoresession -n demo sample-mongodb-restore -w NAME REPOSITORY FAILURE-POLICY PHASE DURATION AGE mg-restore s3-repo Succeeded 8s 49s @@ -546,7 +546,7 @@ In this section, we are going to verify that the desired data has been restored Lets, exec into the database pod and list available tables, -```console +```bash $ kubectl exec -it -n demo restore-mongodb-0 -- mongo admin -u $USER -p $PASSWORD > show dbs @@ -590,7 +590,7 @@ So, from the above output, we can see the 
database `newdb` that we had created e To cleanup the Kubernetes resources created by this tutorial, run: -```console +```bash kubectl delete -n demo restoresession mg-restore kubectl delete -n demo backupconfiguration mg kubectl delete -n demo mg sample-mongodb diff --git a/content/docs/v2024.9.30/guides/mongodb/backup/stash/logical/replicaset/index.md b/content/docs/v2024.9.30/guides/mongodb/backup/stash/logical/replicaset/index.md index ec825efc41..e39097065c 100644 --- a/content/docs/v2024.9.30/guides/mongodb/backup/stash/logical/replicaset/index.md +++ b/content/docs/v2024.9.30/guides/mongodb/backup/stash/logical/replicaset/index.md @@ -44,7 +44,7 @@ You have to be familiar with following custom resources: To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create `demo` namespace if you haven't created yet. -```console +```bash $ kubectl create ns demo namespace/demo created ``` @@ -84,7 +84,7 @@ spec: Create the above `MongoDB` crd, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/replicaset/examples/mongodb-replicaset.yaml mongodb.kubedb.com/sample-mgo-rs created ``` @@ -93,7 +93,7 @@ KubeDB will deploy a MongoDB database according to the above specification. It w Let's check if the database is ready to use, -```console +```bash $ kubectl get mg -n demo sample-mgo-rs NAME VERSION STATUS AGE sample-mgo-rs 4.4.26 Ready 1m @@ -101,7 +101,7 @@ sample-mgo-rs 4.4.26 Ready 1m The database is `Running`. 
Verify that KubeDB has created a Secret and a Service for this database using the following commands, -```console +```bash $ kubectl get secret -n demo -l=app.kubernetes.io/instance=sample-mgo-rs NAME TYPE DATA AGE sample-mgo-rs-auth Opaque 2 117s @@ -119,7 +119,7 @@ KubeDB creates an [AppBinding](/docs/v2024.9.30/guides/mongodb/concepts/appbindi Verify that the `AppBinding` has been created successfully using the following command, -```console +```bash $ kubectl get appbindings -n demo NAME AGE sample-mgo-rs 58s @@ -127,7 +127,7 @@ sample-mgo-rs 58s Let's check the YAML of the above `AppBinding`, -```console +```bash $ kubectl get appbindings -n demo sample-mgo-rs -o yaml ``` @@ -198,7 +198,7 @@ Stash uses the `AppBinding` crd to connect with the target database. It requires Now, we are going to exec into the database pod and create some sample data. At first, find out the database pod using the following command, -```console +```bash $ kubectl get pods -n demo --selector="app.kubernetes.io/instance=sample-mgo-rs" NAME READY STATUS RESTARTS AGE sample-mgo-rs-0 1/1 Running 0 16m @@ -208,7 +208,7 @@ sample-mgo-rs-2 1/1 Running 0 15m Now, let's exec into the pod and create a table, -```console +```bash $ export USER=$(kubectl get secrets -n demo sample-mgo-rs-auth -o jsonpath='{.data.\username}' | base64 -d) $ export PASSWORD=$(kubectl get secrets -n demo sample-mgo-rs-auth -o jsonpath='{.data.\password}' | base64 -d) @@ -260,7 +260,7 @@ We are going to store our backed up data into a GCS bucket. 
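The two `AppBinding` fields Stash relies on can also be pulled out of the YAML programmatically. The fragment below is a trimmed, illustrative stand-in for the real `kubectl get appbindings ... -o yaml` output, with a small `awk` filter extracting the Service name:

```bash
# Trimmed, illustrative stand-in for:
#   kubectl get appbindings -n demo sample-mgo-rs -o yaml
ab=$(mktemp)
cat > "$ab" <<'EOF'
spec:
  clientConfig:
    service:
      name: sample-mgo-rs
      port: 27017
  secret:
    name: sample-mgo-rs-auth
EOF

# Print the first `name:` after `service:` -- the Service Stash connects through
awk '/service:/ {s=1} s && /name:/ {print $2; exit}' "$ab"
```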
At first, we need to Let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket, -```console +```bash $ echo -n 'changeit' > RESTIC_PASSWORD $ echo -n '' > GOOGLE_PROJECT_ID $ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY @@ -291,7 +291,7 @@ spec: Let's create the `Repository` we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/replicaset/examples/repository-replicaset.yaml repository.stash.appscode.com/gcs-repo-replicaset created ``` @@ -334,7 +334,7 @@ Here, Let's create the `BackupConfiguration` crd we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/replicaset/examples/backupconfiguration-replicaset.yaml backupconfiguration.stash.appscode.com/sample-mgo-rs-backup created ``` @@ -343,7 +343,7 @@ backupconfiguration.stash.appscode.com/sample-mgo-rs-backup created If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. Let's verify the `Phase` of the BackupConfiguration, -```console +```bash $ kubectl get backupconfiguration -n demo NAME TASK SCHEDULE PAUSED PHASE AGE sample-mgo-rs-backup mongodb-backup-4.4.6 */5 * * * * Ready 11s @@ -355,7 +355,7 @@ Stash will create a CronJob with the schedule specified in `spec.schedule` field Verify that the CronJob has been created using the following command, -```console +```bash $ kubectl get cronjob -n demo NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE sample-mgo-rs-backup */5 * * * * False 0 62s @@ -367,7 +367,7 @@ The `sample-mgo-rs-backup` CronJob will trigger a backup on each schedule by cre Wait for the next schedule. 
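The `*/5 * * * *` schedule shown above fires on every minute of the hour divisible by 5. A quick sketch of which minutes that `*/5` field matches:

```bash
# Minutes of the hour matched by the `*/5` field in `*/5 * * * *`
minutes=''
for m in $(seq 0 59); do
  if [ $((m % 5)) -eq 0 ]; then
    minutes="${minutes}${m} "
  fi
done
echo "$minutes"
```

So twelve backups are triggered per hour, one every five minutes.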
Run the following command to watch `BackupSession` crd, -```console +```bash $ kubectl get backupsession -n demo -w NAME INVOKER-TYPE INVOKER-NAME PHASE AGE sample-mgo-rs-backup-1563540308 BackupConfiguration sample-mgo-rs-backup Running 5m19s @@ -380,7 +380,7 @@ We can see above that the backup session has succeeded. Now, we are going to ver Once a backup is complete, Stash will update the respective `Repository` crd to reflect the backup. Check that the repository `gcs-repo-replicaset` has been updated by the following command, -```console +```bash $ kubectl get repository -n demo gcs-repo-replicaset NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE gcs-repo-replicaset true 3.844 KiB 2 14s 10m @@ -410,7 +410,7 @@ BackupConfiguration demo/sample-mgo-rs-backup has been paused successfu Now, wait for a moment. Stash will pause the BackupConfiguration. Verify that the BackupConfiguration has been paused, -```console +```bash $ kubectl get backupconfiguration -n demo sample-mgo-rs-backup NAME TASK SCHEDULE PAUSED PHASE AGE sample-mgo-rs-backup mongodb-backup-4.4.6 */5 * * * * true Ready 26m @@ -421,7 +421,7 @@ Notice the `PAUSED` column. Value `true` for this field means that the BackupCon #### Simulate Disaster Now, let’s simulate an accidental deletion scenario. Here, we are going to exec into the database pod and delete the `newdb` database we had created earlier. 
-```console +```bash $ kubectl exec -it -n demo sample-mgo-rs-0 -- mongo admin -u $USER -p $PASSWORD rs0:PRIMARY> rs.isMaster().primary @@ -471,7 +471,7 @@ Here, Let's create the `RestoreSession` crd we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/replicaset/examples/estoresession-replicaset.yaml restoresession.stash.appscode.com/sample-mgo-rs-restore created ``` @@ -480,7 +480,7 @@ Once, you have created the `RestoreSession` crd, Stash will create a job to rest Run the following command to watch `RestoreSession` phase, -```console +```bash $ kubectl get restoresession -n demo sample-mgo-rs-restore -w NAME REPOSITORY-NAME PHASE AGE sample-mgo-rs-restore gcs-repo-replicaset Running 5s @@ -497,7 +497,7 @@ In this section, we are going to verify that the desired data has been restored Lets, exec into the database pod and list available tables, -```console +```bash $ kubectl exec -it -n demo sample-mgo-rs-0 -- mongo admin -u $USER -p $PASSWORD rs0:PRIMARY> rs.isMaster().primary @@ -597,7 +597,7 @@ spec: This time, we have to provide the Stash Addon information in `spec.task` section of `BackupConfiguration` object as it does not present in the `AppBinding` object that we are creating manually. 
-```console +```bash $ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/replicaset/examples/standalone-backup.yaml appbinding.appcatalog.appscode.com/sample-mgo-rs-custom created repository.stash.appscode.com/gcs-repo-custom created @@ -663,7 +663,7 @@ spec: - snapshots: [latest] ``` -```console +```bash $ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/replicaset/rexamples/estored-standalone.yaml mongodb.kubedb.com/restored-mongodb created @@ -681,7 +681,7 @@ restored-mongodb 4.4.26 Ready 2m Now, exec into the database pod and list available tables, -```console +```bash $ export USER=$(kubectl get secrets -n demo restored-mongodb-auth -o jsonpath='{.data.\username}' | base64 -d) $ export PASSWORD=$(kubectl get secrets -n demo restored-mongodb-auth -o jsonpath='{.data.\password}' | base64 -d) @@ -726,7 +726,7 @@ So, from the above output, we can see the database `newdb` that we had created i To cleanup the Kubernetes resources created by this tutorial, run: -```console +```bash kubectl delete -n demo restoresession sample-mgo-rs-restore sample-mongodb-restore kubectl delete -n demo backupconfiguration sample-mgo-rs-backup sample-mgo-rs-backup2 kubectl delete -n demo mg sample-mgo-rs restored-mongodb diff --git a/content/docs/v2024.9.30/guides/mongodb/backup/stash/logical/sharding/index.md b/content/docs/v2024.9.30/guides/mongodb/backup/stash/logical/sharding/index.md index 58f4c35e2f..729161df1d 100644 --- a/content/docs/v2024.9.30/guides/mongodb/backup/stash/logical/sharding/index.md +++ b/content/docs/v2024.9.30/guides/mongodb/backup/stash/logical/sharding/index.md @@ -44,7 +44,7 @@ You have to be familiar with following custom resources: To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create `demo` namespace if you haven't created yet. 
-```console +```bash $ kubectl create ns demo namespace/demo created ``` @@ -92,7 +92,7 @@ spec: Create the above `MongoDB` crd, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/sharding/examples/mongodb-sharding.yaml mongodb.kubedb.com/sample-mgo-sh created ``` @@ -101,7 +101,7 @@ KubeDB will deploy a MongoDB database according to the above specification. It w Let's check if the database is ready to use, -```console +```bash $ kubectl get mg -n demo sample-mgo-sh NAME VERSION STATUS AGE sample-mgo-sh 4.4.26 Ready 35m @@ -109,7 +109,7 @@ sample-mgo-sh 4.4.26 Ready 35m The database is `Ready`. Verify that KubeDB has created a Secret and a Service for this database using the following commands, -```console +```bash $ kubectl get secret -n demo -l=app.kubernetes.io/instance=sample-mgo-sh NAME TYPE DATA AGE sample-mgo-sh-auth Opaque 2 36m @@ -130,7 +130,7 @@ KubeDB creates an [AppBinding](/docs/v2024.9.30/guides/mongodb/concepts/appbindi Verify that the `AppBinding` has been created successfully using the following command, -```console +```bash $ kubectl get appbindings -n demo NAME AGE sample-mgo-sh 30m @@ -138,7 +138,7 @@ sample-mgo-sh 30m Let's check the YAML of the above `AppBinding`, -```console +```bash $ kubectl get appbindings -n demo sample-mgo-sh -o yaml ``` @@ -213,7 +213,7 @@ Stash uses the `AppBinding` crd to connect with the target database. It requires Now, we are going to exec into the database pod and create some sample data. 
At first, find out the database pod using the following command, -```console +```bash $ kubectl get pods -n demo --selector="mongodb.kubedb.com/node.mongos=sample-mgo-sh-mongos" NAME READY STATUS RESTARTS AGE sample-mgo-sh-mongos-9459cfc44-4jthd 1/1 Running 0 60m @@ -222,7 +222,7 @@ sample-mgo-sh-mongos-9459cfc44-6d2st 1/1 Running 0 60m Now, let's exec into the pod and create a table, -```console +```bash $ export USER=$(kubectl get secrets -n demo sample-mgo-sh-auth -o jsonpath='{.data.\username}' | base64 -d) $ export PASSWORD=$(kubectl get secrets -n demo sample-mgo-sh-auth -o jsonpath='{.data.\password}' | base64 -d) @@ -271,7 +271,7 @@ We are going to store our backed up data into a GCS bucket. At first, we need to Let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket, -```console +```bash $ echo -n 'changeit' > RESTIC_PASSWORD $ echo -n '' > GOOGLE_PROJECT_ID $ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY @@ -302,7 +302,7 @@ spec: Let's create the `Repository` we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/sharding/examples/repository-sharding.yaml repository.stash.appscode.com/gcs-repo-sharding created ``` @@ -345,7 +345,7 @@ Here, Let's create the `BackupConfiguration` crd we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/sharding/examples/backupconfiguration-sharding.yaml backupconfiguration.stash.appscode.com/sample-mgo-sh-backup created ``` @@ -354,7 +354,7 @@ backupconfiguration.stash.appscode.com/sample-mgo-sh-backup created If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. 
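The `gcs-secret` above was assembled with `--from-file`, which uses each file's name as the Secret key and its contents as the value. A local sketch of that mapping (run in a scratch directory; `printf` is used instead of `echo -n` for portability):

```bash
# `--from-file=RESTIC_PASSWORD` takes the file *name* as the Secret key and
# the file *contents* as the value (base64-encoded by Kubernetes on storage).
workdir=$(mktemp -d)
cd "$workdir"
printf '%s' 'changeit' > RESTIC_PASSWORD   # same content the guide writes
key=$(basename RESTIC_PASSWORD)
value=$(cat RESTIC_PASSWORD)
echo "${key}=${value}"
```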
Let's verify the `Phase` of the BackupConfiguration, -```console +```bash $ kubectl get backupconfiguration -n demo NAME TASK SCHEDULE PAUSED PHASE AGE sample-mgo-sh-backup mongodb-backup-4.4.6 */5 * * * * Ready 11s @@ -366,7 +366,7 @@ Stash will create a CronJob with the schedule specified in `spec.schedule` field Verify that the CronJob has been created using the following command, -```console +```bash $ kubectl get cronjob -n demo NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE sample-mgo-sh-backup */5 * * * * False 0 13s @@ -378,7 +378,7 @@ The `sample-mgo-sh-backup` CronJob will trigger a backup on each schedule by cre Wait for the next schedule. Run the following command to watch `BackupSession` crd, -```console +```bash $ kubectl get backupsession -n demo -w NAME INVOKER-TYPE INVOKER-NAME PHASE AGE sample-mgo-sh-backup-1563512707 BackupConfiguration sample-mgo-sh-backup Running 5m19s @@ -391,7 +391,7 @@ We can see above that the backup session has succeeded. Now, we are going to ver Once a backup is complete, Stash will update the respective `Repository` crd to reflect the backup. Check that the repository `gcs-repo-sharding` has been updated by the following command, -```console +```bash $ kubectl get repository -n demo gcs-repo-sharding NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE gcs-repo-sharding true 66.453 KiB 12 1m 20m @@ -423,7 +423,7 @@ BackupConfiguration demo/sample-mgo-sh-backup has been paused successfully. Now, wait for a moment. Stash will pause the BackupConfiguration. Verify that the BackupConfiguration has been paused, -```console +```bash $ kubectl get backupconfiguration -n demo sample-mgo-sh-backup NAME TASK SCHEDULE PAUSED PHASE AGE sample-mgo-sh-backup mongodb-restore-4.4.6 */5 * * * * true Ready 26m @@ -434,7 +434,7 @@ Notice the `PAUSED` column. Value `true` for this field means that the BackupCon #### Simulate Disaster: Now, let’s simulate an accidental deletion scenario. 
Here, we are going to exec into the database pod and delete the `newdb` database we had created earlier. -```console +```bash $ kubectl exec -it -n demo sample-mgo-sh-mongos-9459cfc44-4jthd -- mongo admin -u $USER -p $PASSWORD mongos> use newdb @@ -484,7 +484,7 @@ Here, Let's create the `RestoreSession` crd we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/sharding/examples/restoresession-sharding.yaml restoresession.stash.appscode.com/sample-mgo-sh-restore created ``` @@ -493,7 +493,7 @@ Once, you have created the `RestoreSession` crd, Stash will create a job to rest Run the following command to watch `RestoreSession` phase, -```console +```bash $ kubectl get restoresession -n demo sample-mgo-sh-restore -w NAME REPOSITORY-NAME PHASE AGE sample-mgo-sh-restore gcs-repo-sharding Running 5s @@ -508,7 +508,7 @@ In this section, we are going to verify that the desired data has been restored Lets, exec into the database pod and list available tables, -```console +```bash $ kubectl exec -it -n demo sample-mgo-sh-mongos-9459cfc44-4jthd -- mongo admin -u $USER -p $PASSWORD @@ -606,7 +606,7 @@ spec: This time, we have to provide Stash addon info in `spec.task` section of `BackupConfiguration` object as the `AppBinding` we are creating manually does not have those info. 
-```console +```bash $ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/sharding/examples/standalone-backup.yaml appbinding.appcatalog.appscode.com/sample-mgo-sh-custom created repository.stash.appscode.com/gcs-repo-custom created @@ -672,7 +672,7 @@ spec: - snapshots: [latest] ``` -```console +```bash $ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/sharding/examples/restored-standalone.yaml mongodb.kubedb.com/restored-mongodb created @@ -690,7 +690,7 @@ restored-mongodb 4.4.26 Ready 56s Now, exec into the database pod and list available tables, -```console +```bash $ export USER=$(kubectl get secrets -n demo restored-mongodb-auth -o jsonpath='{.data.\username}' | base64 -d) $ export PASSWORD=$(kubectl get secrets -n demo restored-mongodb-auth -o jsonpath='{.data.\password}' | base64 -d) @@ -733,7 +733,7 @@ So, from the above output, we can see the database `newdb` that we had created i To cleanup the Kubernetes resources created by this tutorial, run: -```console +```bash kubectl delete -n demo restoresession sample-mgo-sh-restore sample-mongodb-restore kubectl delete -n demo backupconfiguration sample-mgo-sh-backup sample-mgo-sh-backup2 kubectl delete -n demo mg sample-mgo-sh restored-mongodb diff --git a/content/docs/v2024.9.30/guides/mongodb/backup/stash/logical/standalone/index.md b/content/docs/v2024.9.30/guides/mongodb/backup/stash/logical/standalone/index.md index 007badeddf..8fb2630606 100644 --- a/content/docs/v2024.9.30/guides/mongodb/backup/stash/logical/standalone/index.md +++ b/content/docs/v2024.9.30/guides/mongodb/backup/stash/logical/standalone/index.md @@ -44,7 +44,7 @@ You have to be familiar with following custom resources: To keep things isolated, we are going to use a separate namespace called `demo` throughout this tutorial. Create `demo` namespace if you haven't created yet. 
-```console +```bash $ kubectl create ns demo namespace/demo created ``` @@ -82,7 +82,7 @@ spec: Create the above `MongoDB` crd, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/standalone/examples/mongodb.yaml mongodb.kubedb.com/sample-mongodb created ``` @@ -91,7 +91,7 @@ KubeDB will deploy a MongoDB database according to the above specification. It w Let's check if the database is ready to use, -```console +```bash $ kubectl get mg -n demo sample-mongodb NAME VERSION STATUS AGE sample-mongodb 4.4.26 Ready 2m9s @@ -99,7 +99,7 @@ sample-mongodb 4.4.26 Ready 2m9s The database is `Ready`. Verify that KubeDB has created a Secret and a Service for this database using the following commands, -```console +```bash $ kubectl get secret -n demo -l=app.kubernetes.io/instance=sample-mongodb NAME TYPE DATA AGE sample-mongodb-auth Opaque 2 2m28s @@ -116,7 +116,7 @@ Here, we have to use service `sample-mongodb` and secret `sample-mongodb-auth` t Verify that the `AppBinding` has been created successfully using the following command, -```console +```bash $ kubectl get appbindings -n demo NAME AGE sample-mongodb 20m @@ -124,7 +124,7 @@ sample-mongodb 20m Let's check the YAML of the above `AppBinding`, -```console +```bash $ kubectl get appbindings -n demo sample-mongodb -o yaml ``` @@ -192,7 +192,7 @@ Stash uses the `AppBinding` crd to connect with the target database. It requires Now, we are going to exec into the database pod and create some sample data. 
At first, find out the database pod using the following command, -```console +```bash $ kubectl get pods -n demo --selector="app.kubernetes.io/instance=sample-mongodb" NAME READY STATUS RESTARTS AGE sample-mongodb-0 1/1 Running 0 12m @@ -200,7 +200,7 @@ sample-mongodb-0 1/1 Running 0 12m Now, let's exec into the pod and create a table, -```console +```bash $ export USER=$(kubectl get secrets -n demo sample-mongodb-auth -o jsonpath='{.data.\username}' | base64 -d) $ export PASSWORD=$(kubectl get secrets -n demo sample-mongodb-auth -o jsonpath='{.data.\password}' | base64 -d) @@ -248,7 +248,7 @@ We are going to store our backed up data into a GCS bucket. At first, we need to Let's create a secret called `gcs-secret` with access credentials to our desired GCS bucket, -```console +```bash $ echo -n 'changeit' > RESTIC_PASSWORD $ echo -n '' > GOOGLE_PROJECT_ID $ cat downloaded-sa-key.json > GOOGLE_SERVICE_ACCOUNT_JSON_KEY @@ -279,7 +279,7 @@ spec: Let's create the `Repository` we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/standalone/examples/repository.yaml repository.stash.appscode.com/gcs-repo created ``` @@ -322,7 +322,7 @@ Here, Let's create the `BackupConfiguration` crd we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/standalone/examples/backupconfiguration.yaml backupconfiguration.stash.appscode.com/sample-mongodb-backup created ``` @@ -331,7 +331,7 @@ backupconfiguration.stash.appscode.com/sample-mongodb-backup created If everything goes well, the phase of the `BackupConfiguration` should be `Ready`. The `Ready` phase indicates that the backup setup is successful. 
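The `Ready` check used at each step of this guide can also be scripted. The sketch below stubs the `kubectl get backupconfiguration ... -o jsonpath='{.status.phase}'` call with a hard-coded value so only the comparison logic runs:

```bash
# Stub for: kubectl get backupconfiguration -n demo sample-mongodb-backup \
#             -o jsonpath='{.status.phase}'
phase='Ready'   # hard-coded stand-in for the live value

if [ "$phase" = 'Ready' ]; then
  echo 'backup setup is successful'
else
  echo "unexpected phase: $phase" >&2
  exit 1
fi
```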
Let's verify the `Phase` of the BackupConfiguration, -```console +```bash $ kubectl get backupconfiguration -n demo NAME TASK SCHEDULE PAUSED PHASE AGE sample-mongodb-backup mongodb-backup-4.4.6 */5 * * * * Ready 11s @@ -343,7 +343,7 @@ Stash will create a CronJob with the schedule specified in `spec.schedule` field Verify that the CronJob has been created using the following command, -```console +```bash $ kubectl get cronjob -n demo NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE sample-mongodb-backup */5 * * * * False 0 61s @@ -355,7 +355,7 @@ The `sample-mongodb-backup` CronJob will trigger a backup on each schedule by cr Wait for the next schedule. Run the following command to watch `BackupSession` crd, -```console +```bash $ kubectl get backupsession -n demo -w NAME INVOKER-TYPE INVOKER-NAME PHASE AGE sample-mongodb-backup-1561974001 BackupConfiguration sample-mongodb-backup Running 5m19s @@ -368,7 +368,7 @@ We can see above that the backup session has succeeded. Now, we are going to ver Once a backup is complete, Stash will update the respective `Repository` crd to reflect the backup. Check that the repository `gcs-repo` has been updated by the following command, -```console +```bash $ kubectl get repository -n demo gcs-repo NAME INTEGRITY SIZE SNAPSHOT-COUNT LAST-SUCCESSFUL-BACKUP AGE gcs-repo true 1.611 KiB 1 33s 33m @@ -399,7 +399,7 @@ BackupConfiguration demo/sample-mongodb-backup has been paused successfully. Now, wait for a moment. Stash will pause the BackupConfiguration. Verify that the BackupConfiguration has been paused, -```console +```bash $ kubectl get backupconfiguration -n demo sample-mongodb-backup NAME TASK SCHEDULE PAUSED PHASE AGE sample-mongodb-backup mongodb-backup-4.4.6 */5 * * * * true Ready 26m @@ -410,7 +410,7 @@ Notice the `PAUSED` column. Value `true` for this field means that the BackupCon #### Simulate Disaster: Now, let’s simulate an accidental deletion scenario. 
Here, we are going to exec into the database pod and delete the `newdb` database we had created earlier. -```console +```bash $ kubectl exec -it -n demo sample-mongodb-0 -- mongo admin -u $USER -p $PASSWORD > use newdb switched to db newdb @@ -457,7 +457,7 @@ Here, Let's create the `RestoreSession` crd we have shown above, -```console +```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/mongodb/backup/logical/standalone/examples/restoresession.yaml restoresession.stash.appscode.com/sample-mongodb-restore created ``` @@ -466,7 +466,7 @@ Once, you have created the `RestoreSession` crd, Stash will create a job to rest Run the following command to watch `RestoreSession` phase, -```console +```bash $ kubectl get restoresession -n demo sample-mongodb-restore -w NAME REPOSITORY-NAME PHASE AGE sample-mongodb-restore gcs-repo Running 5s @@ -481,7 +481,7 @@ In this section, we are going to verify that the desired data has been restored Lets, exec into the database pod and list available tables, -```console +```bash $ kubectl exec -it -n demo sample-mongodb-0 -- mongo admin -u $USER -p $PASSWORD > show dbs @@ -519,7 +519,7 @@ So, from the above output, we can see the database `newdb` that we had created e To cleanup the Kubernetes resources created by this tutorial, run: -```console +```bash kubectl delete -n demo restoresession sample-mongodb-restore sample-mongo kubectl delete -n demo backupconfiguration sample-mongodb-backup kubectl delete -n demo mg sample-mongodb diff --git a/content/docs/v2024.9.30/guides/mongodb/concepts/appbinding.md b/content/docs/v2024.9.30/guides/mongodb/concepts/appbinding.md index 695a90d1d9..12ad005c39 100644 --- a/content/docs/v2024.9.30/guides/mongodb/concepts/appbinding.md +++ b/content/docs/v2024.9.30/guides/mongodb/concepts/appbinding.md @@ -29,7 +29,7 @@ info: An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a 
non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding). -If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. diff --git a/content/docs/v2024.9.30/guides/percona-xtradb/concepts/appbinding/index.md b/content/docs/v2024.9.30/guides/percona-xtradb/concepts/appbinding/index.md index 7394258a02..7f078d7e29 100644 --- a/content/docs/v2024.9.30/guides/percona-xtradb/concepts/appbinding/index.md +++ b/content/docs/v2024.9.30/guides/percona-xtradb/concepts/appbinding/index.md @@ -29,7 +29,7 @@ info: An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding). 
-If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. diff --git a/content/docs/v2024.9.30/guides/pgbouncer/concepts/appbinding.md b/content/docs/v2024.9.30/guides/pgbouncer/concepts/appbinding.md index 83d52a9291..a9ac430880 100644 --- a/content/docs/v2024.9.30/guides/pgbouncer/concepts/appbinding.md +++ b/content/docs/v2024.9.30/guides/pgbouncer/concepts/appbinding.md @@ -29,7 +29,7 @@ info: An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding). -If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. 
KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. diff --git a/content/docs/v2024.9.30/guides/pgpool/concepts/appbinding.md b/content/docs/v2024.9.30/guides/pgpool/concepts/appbinding.md index 042d2e23f8..460f595cae 100644 --- a/content/docs/v2024.9.30/guides/pgpool/concepts/appbinding.md +++ b/content/docs/v2024.9.30/guides/pgpool/concepts/appbinding.md @@ -29,7 +29,7 @@ info: An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding). -If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. 
diff --git a/content/docs/v2024.9.30/guides/postgres/concepts/appbinding.md b/content/docs/v2024.9.30/guides/postgres/concepts/appbinding.md index d8b4cbe330..8c57cd1bcd 100644 --- a/content/docs/v2024.9.30/guides/postgres/concepts/appbinding.md +++ b/content/docs/v2024.9.30/guides/postgres/concepts/appbinding.md @@ -29,7 +29,7 @@ info: An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding). -If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. 
diff --git a/content/docs/v2024.9.30/guides/proxysql/concepts/appbinding/index.md b/content/docs/v2024.9.30/guides/proxysql/concepts/appbinding/index.md index 89400cb389..a40540f2f7 100644 --- a/content/docs/v2024.9.30/guides/proxysql/concepts/appbinding/index.md +++ b/content/docs/v2024.9.30/guides/proxysql/concepts/appbinding/index.md @@ -29,7 +29,7 @@ info: An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding). -If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. 
diff --git a/content/docs/v2024.9.30/guides/redis/backup/kubestash/application-level/examples/backupstorage.yaml b/content/docs/v2024.9.30/guides/redis/backup/kubestash/application-level/examples/backupstorage.yaml index d081682cfc..6ab3df02ac 100644 --- a/content/docs/v2024.9.30/guides/redis/backup/kubestash/application-level/examples/backupstorage.yaml +++ b/content/docs/v2024.9.30/guides/redis/backup/kubestash/application-level/examples/backupstorage.yaml @@ -7,7 +7,7 @@ spec: storage: provider: gcs gcs: - bucket: neaj-demo + bucket: kubestash-qa prefix: demo secretName: gcs-secret usagePolicy: diff --git a/content/docs/v2024.9.30/guides/redis/backup/kubestash/application-level/index.md b/content/docs/v2024.9.30/guides/redis/backup/kubestash/application-level/index.md index 93aa939538..3bfd59a5e0 100644 --- a/content/docs/v2024.9.30/guides/redis/backup/kubestash/application-level/index.md +++ b/content/docs/v2024.9.30/guides/redis/backup/kubestash/application-level/index.md @@ -214,7 +214,7 @@ $ kubectl exec -it -n demo sample-redis-0 -c redis -- bash redis@sample-redis-0:/data$ redis-cli 127.0.0.1:6379> set db redis OK -127.0.0.1:6379> set name neaj +127.0.0.1:6379> set name batman OK 127.0.0.1:6379> set key value OK @@ -256,7 +256,7 @@ spec: storage: provider: gcs gcs: - bucket: neaj-demo + bucket: kubestash-qa prefix: demo secretName: gcs-secret usagePolicy: @@ -643,7 +643,7 @@ redis@sample-redis-0:/data$ redis-cli 127.0.0.1:6379> get db "redis" 127.0.0.1:6379> get name -"neaj" +"batman" 127.0.0.1:6379> get key "value" 127.0.0.1:6379> exit diff --git a/content/docs/v2024.9.30/guides/redis/backup/kubestash/auto-backup/examples/backupstorage.yaml b/content/docs/v2024.9.30/guides/redis/backup/kubestash/auto-backup/examples/backupstorage.yaml index d081682cfc..6ab3df02ac 100644 --- a/content/docs/v2024.9.30/guides/redis/backup/kubestash/auto-backup/examples/backupstorage.yaml +++ 
b/content/docs/v2024.9.30/guides/redis/backup/kubestash/auto-backup/examples/backupstorage.yaml @@ -7,7 +7,7 @@ spec: storage: provider: gcs gcs: - bucket: neaj-demo + bucket: kubestash-qa prefix: demo secretName: gcs-secret usagePolicy: diff --git a/content/docs/v2024.9.30/guides/redis/backup/kubestash/auto-backup/index.md b/content/docs/v2024.9.30/guides/redis/backup/kubestash/auto-backup/index.md index 666c79a52d..3fc763b26b 100644 --- a/content/docs/v2024.9.30/guides/redis/backup/kubestash/auto-backup/index.md +++ b/content/docs/v2024.9.30/guides/redis/backup/kubestash/auto-backup/index.md @@ -86,7 +86,7 @@ spec: storage: provider: gcs gcs: - bucket: neaj-demo + bucket: kubestash-qa prefix: blueprint secretName: gcs-secret usagePolicy: diff --git a/content/docs/v2024.9.30/guides/redis/backup/kubestash/customization/common/gcs-backupstorage.yaml b/content/docs/v2024.9.30/guides/redis/backup/kubestash/customization/common/gcs-backupstorage.yaml index 5535624773..d88cdf24df 100644 --- a/content/docs/v2024.9.30/guides/redis/backup/kubestash/customization/common/gcs-backupstorage.yaml +++ b/content/docs/v2024.9.30/guides/redis/backup/kubestash/customization/common/gcs-backupstorage.yaml @@ -7,7 +7,7 @@ spec: storage: provider: gcs gcs: - bucket: neaj-demo + bucket: kubestash-qa prefix: demo secretName: gcs-secret usagePolicy: diff --git a/content/docs/v2024.9.30/guides/redis/backup/kubestash/customization/common/s3-backupstorage.yaml b/content/docs/v2024.9.30/guides/redis/backup/kubestash/customization/common/s3-backupstorage.yaml index ac6ca82712..7f7a03e6f0 100644 --- a/content/docs/v2024.9.30/guides/redis/backup/kubestash/customization/common/s3-backupstorage.yaml +++ b/content/docs/v2024.9.30/guides/redis/backup/kubestash/customization/common/s3-backupstorage.yaml @@ -7,7 +7,7 @@ spec: storage: provider: s3 s3: - bucket: neaj-new + bucket: kubestash-qa region: ap-south-1 endpoint: ap-south-1.linodeobjects.com secretName: s3-secret diff --git 
a/content/docs/v2024.9.30/guides/redis/backup/kubestash/logical/examples/backupstorage.yaml b/content/docs/v2024.9.30/guides/redis/backup/kubestash/logical/examples/backupstorage.yaml index 5535624773..d88cdf24df 100644 --- a/content/docs/v2024.9.30/guides/redis/backup/kubestash/logical/examples/backupstorage.yaml +++ b/content/docs/v2024.9.30/guides/redis/backup/kubestash/logical/examples/backupstorage.yaml @@ -7,7 +7,7 @@ spec: storage: provider: gcs gcs: - bucket: neaj-demo + bucket: kubestash-qa prefix: demo secretName: gcs-secret usagePolicy: diff --git a/content/docs/v2024.9.30/guides/redis/backup/kubestash/logical/index.md b/content/docs/v2024.9.30/guides/redis/backup/kubestash/logical/index.md index 74032439f0..6ab962a44b 100644 --- a/content/docs/v2024.9.30/guides/redis/backup/kubestash/logical/index.md +++ b/content/docs/v2024.9.30/guides/redis/backup/kubestash/logical/index.md @@ -249,7 +249,7 @@ redis@redis-cluster-shard0-0:/data$ redis-cli -c OK 127.0.0.1:6379> set db redis OK -127.0.0.1:6379> set name neaj +127.0.0.1:6379> set name batman -> Redirected to slot [5798] located at 10.244.0.48:6379 OK 10.244.0.48:6379> set key value @@ -293,7 +293,7 @@ spec: storage: provider: gcs gcs: - bucket: neaj-demo + bucket: kubestash-qa prefix: demo secretName: gcs-secret usagePolicy: @@ -660,7 +660,7 @@ Once, you have created the `RestoreSession` object, KubeStash will create restor ```bash $ watch kubectl get restoresession -n demo -Every 2.0s: kubectl get restoresession -n demo neaj-desktop: Wed Sep 18 15:53:42 2024 +Every 2.0s: kubectl get restoresession -n demo batman-desktop: Wed Sep 18 15:53:42 2024 NAME REPOSITORY FAILURE-POLICY PHASE DURATION AGE redis-cluster-restore gcs-redis-repo Succeeded 1m26s 4m49s ``` @@ -703,7 +703,7 @@ OK "redis" 127.0.0.1:6379> get name -> Redirected to slot [5798] located at 10.244.0.66:6379 -"neaj" +"batman" 10.244.0.66:6379> get key -> Redirected to slot [12539] located at 10.244.0.70:6379 "value" diff --git 
a/content/docs/v2024.9.30/guides/redis/concepts/appbinding.md b/content/docs/v2024.9.30/guides/redis/concepts/appbinding.md index 23a27a468b..7b5e104203 100644 --- a/content/docs/v2024.9.30/guides/redis/concepts/appbinding.md +++ b/content/docs/v2024.9.30/guides/redis/concepts/appbinding.md @@ -29,7 +29,7 @@ info: An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding). -If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. 
diff --git a/content/docs/v2024.9.30/guides/solr/concepts/appbinding.md b/content/docs/v2024.9.30/guides/solr/concepts/appbinding.md index 36139e64a3..a4d0031536 100644 --- a/content/docs/v2024.9.30/guides/solr/concepts/appbinding.md +++ b/content/docs/v2024.9.30/guides/solr/concepts/appbinding.md @@ -29,7 +29,7 @@ info: An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding). -If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. 
diff --git a/content/docs/v2024.9.30/guides/zookeeper/concepts/appbinding.md b/content/docs/v2024.9.30/guides/zookeeper/concepts/appbinding.md index dea542bc50..f21d9c4685 100644 --- a/content/docs/v2024.9.30/guides/zookeeper/concepts/appbinding.md +++ b/content/docs/v2024.9.30/guides/zookeeper/concepts/appbinding.md @@ -29,7 +29,7 @@ info: An `AppBinding` is a Kubernetes `CustomResourceDefinition`(CRD) which points to an application using either its URL (usually for a non-Kubernetes resident service instance) or a Kubernetes service object (if self-hosted in a Kubernetes cluster), some optional parameters and a credential secret. To learn more about AppBinding and the problems it solves, please read this blog post: [The case for AppBinding](https://appscode.com/blog/post/the-case-for-appbinding). -If you deploy a database using [KubeDB](https://kubedb.com/docs/0.11.0/concepts/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. +If you deploy a database using [KubeDB](https://kubedb.com/docs/latest/welcome/), `AppBinding` object will be created automatically for it. Otherwise, you have to create an `AppBinding` object manually pointing to your desired database. KubeDB uses [Stash](https://appscode.com/products/stash/) to perform backup/recovery of databases. Stash needs to know how to connect with a target database and the credentials necessary to access it. This is done via an `AppBinding`. diff --git a/content/docs/v2024.9.30/overview/README.md b/content/docs/v2024.9.30/overview/README.md index a2426de9fe..de7d4f2889 100644 --- a/content/docs/v2024.9.30/overview/README.md +++ b/content/docs/v2024.9.30/overview/README.md @@ -30,6 +30,6 @@ Kubernetes has emerged as the de-facto way to deploy modern containerized apps o However, many developers want to treat data infrastructure the same as application stacks. 
Operators want to use the same tools for databases and applications and get the same benefits as the application layer in the data layer: rapid spin-up and repeatability across environments. This is where KubeDB by AppsCode comes as a solution. -KubeDB by AppsCode is a production-grade cloud-native database management solution for Kubernetes. KubeDB simplifies and automates routine database tasks such as provisioning, patching, backup, recovery, failure detection, and repair for various popular databases on private and public clouds. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need. +KubeDB by AppsCode is a production-grade cloud-native database management solution for Kubernetes. KubeDB simplifies and automates routine database tasks such as provisioning, upgrading, patching, scaling, volume expansion, backup, recovery, failure detection, and repair for various popular databases on private and public clouds. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need. -KubeDB provides you with many familiar database engines to choose from, including **PostgreSQL**, **MySQL**, **MongoDB**, **Elasticsearch**, **Redis**, **Memcached**, and **Percona XtraDB**. KubeDB’s native integration with Kubernetes makes a unique solution compared to competitive solutions from cloud providers and database vendors. +KubeDB provides you with many familiar database engines to choose from, including **PostgreSQL**, **MySQL**, **MongoDB**, **MariaDB**, **Microsoft SQL Server**, **Elasticsearch**, **OpenSearch**, **Redis**, **Memcached**, **Percona XtraDB**, **Druid**, **FerretDB**, **Kafka**, **PgBouncer**, **Pgpool**, **ProxySQL**, **RabbitMQ**, **SingleStore**, **Solr**, and **ZooKeeper**. 
KubeDB’s native integration with Kubernetes makes it a unique solution compared to competitive solutions from cloud providers and database vendors. diff --git a/content/docs/v2024.9.30/setup/upgrade/index.md b/content/docs/v2024.9.30/setup/upgrade/index.md index 86d7659143..e3b2c0a89b 100644 --- a/content/docs/v2024.9.30/setup/upgrade/index.md +++ b/content/docs/v2024.9.30/setup/upgrade/index.md @@ -35,7 +35,7 @@ In order to upgrade from KubeDB `v2021.xx.xx` to `{{< param "info.version" >}}`, #### 1. Update KubeDB Catalog CRDs -Helm [does not upgrade the CRDs](https://github.com/helm/helm/issues/6581) bundled in a Helm chart if the CRDs already exist. So, to upgrde the KubeDB catalog CRD, please run the command below: +Helm [does not upgrade the CRDs](https://github.com/helm/helm/issues/6581) bundled in a Helm chart if the CRDs already exist. So, to upgrade the KubeDB catalog CRD, please run the command below: ```bash kubectl apply -f https://github.com/kubedb/installer/raw/{{< param "info.version" >}}/crds/kubedb-catalog-crds.yaml